Supporting the University of Edinburgh's commitments to digital skills, information literacy, and sharing knowledge openly


Computer Keyboard with an AI button lit up

Wikipedia at 24: Wikipedia and Artificial Intelligence


Wikipedia at 24

“With more than 250 million views each day, Wikipedia is an invaluable educational resource”.[1]

In light of Wikipedia turning 24 this week (January 15th), and the Wikimedia residency at the University of Edinburgh turning nine years old this week too, this post examines where we are with Wikipedia today in relation to artificial intelligence and the ‘existential threat’ it poses to our knowledge ecosystem. Or not. We’ll see.

NB: This post is especially timely given Keir Starmer’s focus on “unleashing Artificial Intelligence across the UK” on Monday[2][3] and our Principal’s championing of the University of Edinburgh this week as “a global centre for artificial intelligence excellence, with an emphasis on using AI for public good”.

Before we begin in earnest

Wikipedia has, for some time, been given preferential placement in the top search results of Google, the number one search engine. And “search is the way we live now” (Darnton in Hillis, Petit & Jarrett, 2013, p.5)… whether that stays the same remains to be seen with the emergence of chatbots and ‘AI summary’ services. So it is incumbent on knowledge-generating academic institutions to support staff and students in developing robust information literacy: the 21st-century digital research skills necessary in the world today and an understanding of how knowledge is created, curated and disseminated online.

Engaging with Wikipedia in teaching & learning has helped us achieve these outcomes over the last nine years and supported thousands of learners to become discerning ‘open knowledge activists’; better able to see the gaps in our shared knowledge and motivated to address these gaps, especially when it comes to under-represented groups, topics, languages and histories. Better able, also, to discern reliable sources from unreliable sources, biased accounts from those written from a neutral point of view, and copyrighted works from open access. Imbued with the critical thinking, academic referencing skills and graduate competencies any academic institution and employer would hope to see attained.


Point 1: Wikipedia is already making use of machine learning

ORES

The Wikimedia Foundation has been using machine learning for years (since November 2015). ORES is a service that helps grade the quality of Wikipedia edits and evaluate changes made. Part of its function is to flag potentially problematic edits and bring them to the attention of human editors. The idea is that when you have as many edits to deal with as Wikipedia does, applying some means of filtering can make it easier to handle.

“The important thing is that ORES itself does not make edits to Wikipedia, but synthesizes information, and it is the human editors who decide how they act on that information” – Dr. Richard Nevell, Wikimedia UK
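To make the division of labour concrete: an edit-patrolling tool asks ORES to score specific revisions, and a human decides what to do with the scores. The sketch below shows how such a scoring request might be assembled, based on the shape of the classic public ORES v3 API (the Foundation has since been migrating this scoring work to its Lift Wing platform, so treat the exact endpoint as illustrative rather than definitive).

```python
# Minimal sketch of building an ORES edit-quality scoring request.
# The v3 endpoint shape shown here is the classic public ORES API;
# the model names "damaging" and "goodfaith" are two of its
# best-known edit-quality models.
from urllib.parse import urlencode

ORES_BASE = "https://ores.wikimedia.org/v3/scores"

def build_ores_url(wiki: str, rev_ids: list, models: list) -> str:
    """Build a scoring request for one or more revisions.

    wiki    -- project database name, e.g. "enwiki"
    rev_ids -- revision IDs of the edits to score
    models  -- models to apply, e.g. "damaging", "goodfaith"
    """
    query = urlencode({
        "models": "|".join(models),
        "revids": "|".join(str(r) for r in rev_ids),
    })
    return f"{ORES_BASE}/{wiki}/?{query}"

# Example: ask for damage and good-faith scores for one revision.
url = build_ores_url("enwiki", [123456], ["damaging", "goodfaith"])
```

Fetching that URL returns, per revision, a probability that the edit is damaging, which patrolling tools surface to human editors rather than acting on automatically.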

MinT

Rather than relying entirely on external machine translation models (Google Translate, Yandex, Apertium, LingoCloud), Wikimedia now has its own machine translation tool, MinT (Machine in Translation), launched in July 2023, which is based on multiple state-of-the-art open source neural machine translation models[5] including (1) Meta’s NLLB-200, (2) Helsinki University’s OPUS, (3) IndicTrans2 and (4) Softcatalà.

The combined result is that MinT now supports more than 70 languages that are not supported by other services (including 27 languages for which there is no Wikipedia yet).[5]

“The translation models used by MinT support over 200 languages, including many underserved languages that are getting machine translation for the first time”.[6]

Machine translation is one application of AI, or more accurately of large language models, that many readers may be familiar with. It aids the translation of knowledge from one language to another, building understanding between different languages and cultures. The English Wikipedia doesn’t allow unsupervised machine translations to be added to its pages, but human editors are welcome to use these tools when adding content. The key component is human supervision, with no unedited or unaltered machine translation permitted to be published on Wikipedia. We have made use of the Content Translation tool on the Translation Studies MSc for the last eight years to give our students meaningful, practical published translation experience ahead of the world of work.

Point 2: Recent study finds artificial intelligence can aid Wikipedia’s verifiability

“It might seem ironic to use AI to help with citations, given how ChatGPT notoriously botches and hallucinates citations. But it’s important to remember that there’s a lot more to AI language models than chatbots….”[7]

SIDE – a potential use case

A study published in Nature Machine Intelligence in October 2023 demonstrated that SIDE, a neural-network-based system, could aid the verifiability of the references used in Wikipedia’s articles.[8] SIDE was trained on the references in existing ‘Featured Articles’ on Wikipedia (the 8,000+ best quality articles on Wikipedia) to flag citations unlikely to support the statement or claim being made. SIDE would then search the web for alternative citations better placed to support the claim being made in the article.

The paper’s authors observed that, for the top 10% of citations tagged by SIDE as most likely to be unverifiable, human editors preferred the system’s suggested alternatives to the originally cited reference 70% of the time.[8]

What does this mean?

“Wikipedia lives and dies by its references, the links to sources that back up information in the online encyclopaedia. But sometimes, those references are flawed — pointing to broken websites, erroneous information or non-reputable sources.” [7]

This use case could, theoretically, save editors time in checking the accuracy and verifiability of citations in articles BUT Aleksandra Urman, a computational scientist at the University of Zurich, warns that this holds only if the system is deployed correctly and in a way “the Wikipedia community would find most useful”.[8]

Indeed, practical implementation and actual usefulness remain to be seen BUT the potential is acknowledged by some within the Wikimedia and open education space:

“This is a powerful example of machine learning tools that can help scale the work of volunteers by efficiently recommending citations and accurate sources. Improving these processes will allow us to attract new editors to Wikipedia and provide better, more reliable information to billions of people around the world.” – Dr. Shani Evenstein Sigalov, educator and Free Knowledge advocate.

One final note: Urman pointed out that Wikipedia users testing the SIDE system were TWICE as likely to prefer neither of the references as to prefer the ones suggested by SIDE. So in such instances the human editor would still have to go searching for a relevant citation online.

Point 3: ChatGPT and Wikipedia

Do people trust ChatGPT more than Google Search and Wikipedia?

No, thankfully. A focus group and interview study published in 2024 revealed that users do not necessarily trust ChatGPT-generated information as much as Google Search and Wikipedia.[9]

Has the emergence and use of ChatGPT affected engagement with Wikipedia?

In November 2022, ChatGPT was released to the public and quickly became a popular source of information, serving as an effective question-answering resource. Early indications have suggested that it may be drawing users away from traditional question answering services.

A 2024 paper examined Wikipedia page visits, visitor numbers, number of edits and editor numbers across twelve Wikipedia languages. These metrics were compared before and after the 30th of November 2022, when ChatGPT was released. The paper’s authors also developed a panel regression model to better understand and quantify any differences. The paper concludes that while ChatGPT negatively impacted engagement with question-answering services such as StackOverflow, the same could not be said, as yet, of Wikipedia. Indeed, there was little evidence of any impact on edits and editor numbers, and any impact seems to have been extremely limited.[10]

Wikimedia CEO Maryana Iskander states,

“We have not yet seen a drop in page views on the Wikipedia platform since ChatGPT launched. We’re on it, we’re paying close attention, and we’re engaging, but also not freaking out, I would say.”[11]

Do Wikipedia editors think ChatGPT or other AI generators should be used for article creation?

“[While] AI generators are useful for writing believable, human-like text, they are also prone to including erroneous information, and even citing sources and academic papers which don’t exist. This often results in text summaries which seem accurate, but on closer inspection are revealed to be completely fabricated.”[12]

Regents Professor Amy Bruckman, author of Should You Believe Wikipedia?: Online Communities and the Construction of Knowledge, states that large language models are only as good as their ability to distinguish fact from fiction… so, in her view, they [LLMs] can be used to write content for Wikipedia BUT only ever as a first draft, which can only be made useful if it is then edited by humans and the sources cited are checked by humans also.[12]

“Unreviewed AI generated content is a form of vandalism, and we can use the same techniques that we use for vandalism fighting on Wikipedia to fight garbage coming from AI,” stated Bruckman.[12]

Wikimedia CEO Maryana Iskander agrees,

“There are ways bad actors can find their way in. People vandalize pages, but we’ve kind of cracked the code on that, and often bots can be disseminated to revert vandalism, usually within seconds. At the foundation, we’ve built a disinformation team that works with volunteers to track and monitor.”[11]

For the Wikipedia community’s part, a draft policy setting out the limits of usage of artificial intelligence on Wikipedia in article generation has been written to help editors avoid posting any copyright violations on an open-licensed Wikipedia page, or anything that might open Wikipedia volunteers up to libel suits. At the same time, the Wikimedia Foundation’s developers are creating tools to help Wikipedia editors better identify content online that has been written by AI bots. Part of this is the greater worry that digital news media, more than Wikipedia, may be prone to AI-generated content, and it is these hitherto reliable news sources that Wikipedia editors would normally wish to cite.

“I don’t think we can tell people ‘don’t use it’ because it’s just not going to happen. I mean, I would put the genie back in the bottle, if you let me. But given that that’s not possible, all we can do is to check it.”[12]

As what is right or wrong or missing on Wikipedia spreads across the internet, there need to be enough checks and balances and enough human supervision to avoid AI-generated garbage being replicated on Wikipedia and then spreading to other news sources and other AI services. Otherwise we might be in a continuous ‘garbage-in-garbage-out’ spiral to the bottom which Wikimedia Sweden‘s John Cummings termed the Habsburg AI Effect (i.e. a degenerative ‘inbreeding’ of knowledge, with models consuming each other in a death loop and getting progressively and demonstrably worse and more ill each time) at the annual Wikimania conference in August 2024. Despite Wikipedia and Google’s interdependence, the Wikipedia community itself is unsure it wants to enter any kind of unchecked feedback loop with ChatGPT, whereby OpenAI consumes Wikipedia’s free content to train its models, which then feed into other commercial paywalled sites, while ChatGPT’s erroneous ‘hallucinations’ might be feeding, in turn, into Wikipedia articles.

It is true that while Jimmy Wales has expressed his reluctance to see ChatGPT used as yet (“It has a tendency to just make stuff up out of thin air which is just really bad for Wikipedia — that’s just not OK. We’ve got to be really careful about that.”),[13] other Wikipedia editors have expressed their willingness to use it to get past the inertia and “activation energy” of the first couple of paragraphs of a new article. With human supervision (or humans as Wikipedia’s “special sauce”, if you will), this could actually help Wikipedia create greater numbers of quality articles and better reach its aim of becoming the ‘sum of all knowledge’.[14]

One final suggestion posted on the Wikipedia mailing list has been the use of the BLOOM large language model, which makes use of Responsible AI Licences (RAIL).[15]

“Similar to some versions of the open Creative Commons license, the RAIL license enables flexible use of the AI model while also imposing some restrictions—for example, requiring that any derivative models clearly disclose that their outputs are AI-generated, and that anything built off them abide by the same rules.”[12]

A Wikimedia Foundation spokesperson stated that,

“Based on feedback from volunteers, we’re looking into how these models may be able to help close knowledge gaps and increase knowledge access and participation. However, human engagement remains the most essential building block of the Wikimedia knowledge ecosystem. AI works best as an augmentation for the work that humans do on our project.”[12]

Point 4: How Wikipedia can shape the future of AI

WikiAI?

In Alek Tarkowski’s 2023 thought piece he views the ‘existential challenge’ of AI models becoming the new gatekeepers of knowledge (and potentially replacing Wikipedia) as an opportunity for Wikipedia to think differently and develop its own WikiAI, “not just to protect the commons from exploitation. The goal also needs to be the development of approaches that support the commons in a new technological context, which changes how culture and knowledge are produced, shared, and used.”[16] However, in discussion at Wikimania in August 2024, this was felt to be outwith the realms of possibility given the vast resources and financing this would require to get off the ground if tackled unilaterally by the Foundation.

Blacklisting and Attribution?

For Chris Albon, Machine Learning Director at the Wikimedia Foundation, using AI tools has been part of the work of some volunteers since 2002.[17] What’s new is that there may be more sites online using AI-generated content. However, Wikipedia has an existing practice of blacklisting sites/sources once it has become clear they are no longer reliable. More concerning is the emerging disconnect whereby AI models can provide ‘summary’ answers to questions without linking to Wikipedia or providing attribution that the information comes from Wikipedia.

“Without clear attribution and links to the original source from which information was obtained, AI applications risk introducing an unprecedented amount of misinformation into the world. Users will not be able to easily distinguish between accurate information and hallucinations. We have been thinking a lot about this challenge and believe that the solution is attribution.”[17]

Gen-Z?

For Slate writer Stephen Harrison, while a significant number of Wikipedia contributors are already Gen Z (about 20% of Wikipedia editors are aged 18-24, according to a 2022 survey), there is a clear desire to increase this percentage within the Wikipedia community, not least to ensure the continuing relevance of Wikipedia within the knowledge ecosystem.[18] That is, if Wikipedia becomes reduced to mere ‘training data’ for AI models, then who would want to continue editing Wikipedia, and who would want to learn to edit and carry on the mantle when older editors dwindle away? Hence the desire to recruit more young editors from Generation Z, raising their awareness of how widely Wikipedia content is used across the internet and of how they can derive a sense of community and a shared purpose from sharing fact-checked knowledge, plugging gaps and being part of something that feels like a world-changing endeavour.[18]

WikiProject AI Cleanup

An existing project is already clamping down on AI content on Wikipedia, according to Jiji Veronica Kim.[19] Volunteer editors on the project are making use of AI-detection tools to:

  • Identify AI-generated text and images
  • Remove any unsourced claims
  • Remove any posts that do not comply with Wikipedia policies

“The purpose of this project is not to restrict or ban the use of AI in articles, but to verify that its output is acceptable and constructive, and to fix or remove it otherwise…. In other words, check yourself before you wreck yourself.”[19]

Point 5: Wikipedia as a knowledge destination and the internet’s conscience

Edinburgh Award participants sat around a boardroom table in the shadow of Edinburgh castle at final showcase event, CC-BY-SA by Ewan McAndrew, University of Edinburgh

“Digital Volunteering with Wikipedia” – the Edinburgh Award

We know that many students are involved in activities alongside their studies such as volunteering, part-time work, and getting involved in the University community.

To help these activities stand out from the crowd, our University has a new Award for “Digital Volunteering with Wikipedia” to sit beside the other available Edinburgh Awards. The Edinburgh Award is a programme that allows students to get official recognition for their involvement in extracurricular activities and demonstrate their digital capabilities to employers.


There are many different types of Edinburgh Award activity students can undertake, but Digital Volunteering with Wikipedia focuses on developing 3 Graduate Attributes (e.g. digital literacy, written communication, assertiveness & confidence) over the course of 55-80 hours of work, and on providing evidence of demonstrable learning, reflection and impact. These hours are staggered over the October to end-of-March period, punctuated by 3 main mandatory “input” sessions.

In the first, Aspiring, in October, the students self-assess against the Graduate Attributes and select three to develop as part of the award. They also select a topic area of Wikipedia they wish to improve and submit a 400-word action plan for how they plan to develop their chosen Graduate Attributes and how they’ll deliver impact.

Once they have had training and researched their topic areas, the 2nd Input Session, Developing, in late December, requires them to re-assess whether their Graduate Attribute ranking has changed and to submit a completed Fortnightly Log of Activities designed to evidence their work to date and their reflections on how they are progressing towards their personal project goals. We hold fortnightly group research sessions in the library (because not everything is online) to help their research and allow them to edit in a social and supportive environment where they can ask questions and seek help, both from the Wikimedian in Residence and from each other.

Example student project on Francophone Literature, CC-BY-SA by Ewan McAndrew, University of Edinburgh

The final Input Session, Owning, is about coming together to share their project outcomes and reflections, as well as ensuring the students get the opportunity to tie all this in with their future goals and how they will communicate about their Edinburgh Award experience to their peers, academic advisors or employers. This session takes place at the end of March, and their final submissions are an 800-word report or a 3-6 minute video presentation reflecting on both the impact achieved and the development achieved in their 3 chosen Graduate Attributes.

Topics suggested by students to improve online

More interesting are the topics the students wanted to write about: climate change, Covid-19, LGBT history, Black history, women artists, women in STEM. Marginalised groups, under-represented topics, some of the biggest and most pressing challenges in the world today. This shows me that students recognise the importance of addressing knowledge gaps and are intrinsically motivated to improve the world around them.

Here’s a short video of an example project on LGBTQ+ history and women of the MENA region:

The final 10

We started in October with a large cohort of 44 interested students, which reduced to 10 by Input 3. This was to be expected and is in line with other Edinburgh Award programmes that similarly ask students to undertake over 55 hours of extracurricular volunteering.

These ten ‘knowledge activist’ heroes have been put forward to achieve the Award this year.

The projects

  1. Witch hunting: past and present day
  2. Visual culture: Artworks depicting Edinburgh
  3. Francophone literature
  4. Plant pathology
  5. Buddhism and Artificial Intelligence
  6. LGBTQ history and women in the MENA region
  7. Byzantium Greece and Cavafy’s poetry
  8. International development and human rights
  9. History of menstruation
  10. Northumberland Folklore and coverage of Edinburgh related artists, banks, and writers by using museum exhibits 

The outcomes

76,000 words and over 876 references have so far been added to Wikipedia, on pages viewed almost 3 million times already!

41 articles were created, 157 improved and 35 images uploaded, with articles translated into German, Spanish, Greek, Bulgarian and French, including the accused Bavarian witch Anna Maria Schwegelin (translated from German Wikipedia) and Crime of Solidarity (translated from French Wikipedia), a concept coined in France by human rights activists campaigning against laws which, in targeting organised illegal immigration networks, also prevent giving refuge to refugees.

Reflections on the Edinburgh Award. CC-BY-SA by Ewan McAndrew


Here’s a short video of an example project on the history of menstruation:

Here’s a short video of an example project on Witch hunting (past and present):

Quotes from the students

“During the Wikipedia project, Critical Thinking skills were crucial to ensure the information presented was accurate, unbiased, and relevant. As the research progressed, I noticed that my skill improved as I had to analyse and evaluate the information gathered. One of the key improvements in the skills was the ability to identify and evaluate different sources of information. Initially, I relied heavily on a few sources for my research, but as the project progressed, I began considering a wider range of sources. I made an effort to evaluate each source based on its credibility, relevance, and objectivity, which helped me to identify and include the most accurate information.”

“I think that I have helped improve information accessibility on Wikipedia, as one of the most widely used free encyclopaedias I have felt it important to fill gaps in information largely concerning the LGBTQ community and women, as both of these areas are often forgotten about. I think having access to marginalised communities stories, achievements and contributions is a really important value, by contributing to these topics I have hopefully made information available to people around the world.”

“Once I completed my second article I felt more self-assured and assertive on what was appropriate writing to upload onto Wikipedia. I had created an article on one of Cavafy’s poems, which is one of my favourite poems from his anthology. That could’ve also been a contributor to the overall experience too, since producing something which engages with one of your likes makes the activity a little more bearable. As I overcame this barrier, I was able to expand as well as develop my skills by editing as well as creating a lot more articles on Wikipedia. As it stands right now, I have contributed 10k words on Wikipedia. Although the first half of this process was excruciatingly slow, after overcoming my fears and worries I was keener with contributing on Wikipedia and practically spent most days changing, improving, or producing articles.”

“Being a part of writing communities like Wikipedia has helped me to improve not only my writing but also my editing and proofreading skills. I have learned to use plain language, avoid jargon and technical terms, and organise information logically and coherently, thanks to Wikipedia’s style guidelines.”

“Doing this award has helped me make significant progress made on improving my independent research skills. For example, I think that over the course of my project, I have become better at picking out relevant information from very long sources and not spending too much time reading and fussing over smaller less significant details. In addition, I am more proficient at finding sources through Google Scholar and DiscoverEd and have also learnt where to look when struggling to find more information about a topic e.g. using good quality sources referenced in the bibliographies of journals and books I had already found to help grow my source lists.”

“Overall, my confidence to make bolder edits and create quality articles on Wikipedia has grown significantly since I started my project and I now feel that I can have a more significant and active presence on the site. Editing and writing articles about witch-hunting has been incredibly enlightening and rewarding and I want to continue to edit about this important topic after I finish the award.”

“My digital literacy skills have greatly improved compared with when I started my Wikipedia research project. Since the project involved extensive online research, it required me to engage with a wide range of digital tools and technologies. Through this process, I have developed proficiency in various areas of digital literacy, such as information literacy, media literacy, and digital communication.”

“My Wikipedia project on the history of menstruation has had a positive impact on others in several ways. Firstly, the project has helped raise awareness and understanding of an often-overlooked aspect of women’s health and history. By providing accurate and accessible information on the history of menstruation, the project has helped to demystify a topic that has long been stigmatized and taboo. I corrected a key part of the history of menstrual cups, which were first patented in the US in 1867, whereas before the article only included that the first patent for a commercial cup was in 1937. My article on Menstruation and humoral medicine has filled a gap in the content on Wikipedia, and highlighted the ambiguities in the ways that people viewed menstruation in the early modern period.”

“I wrote an article on Mary Marjory MacDonald, and significantly edited articles on Edwin Chiloba and the Signares. Mary was nicknamed ‘the Scottish Queen of Thieves’, and I believe it is important to represent more women on Wikipedia, especially figures who do not fit into traditional gender roles. This is also the case for the Signares, who were a group of women who acquired wealth and power in colonial Senegal. In addition, representing African LGBTQ+ activists such as Edwin Chiloba is important, since they are a group often neglected on Wikipedia.”

“I decided to focus on creating new pages to maximise my impact as some very important parts of the history of Francophone literature were missing, such as The Colonial System Unveiled, one of the earliest critiques of colonialism, which is unfortunately not recognised widely enough as a significant historical anticolonial text. I also decided to emphasise the contributions of women to Francophone Caribbean and African writing, as they can be overlooked in this area.”

“I hope that my contributions can help other students like me, such as those studying French or taking the course that inspired me to pursue this project. On a wider level, I also think my project can help increase the awareness of Francophone literature among English speakers. I believe it is very much underappreciated and people do not realise how much influence Francophone African and Caribbean thought have had on literary criticism even in an Anglophone context.”

Here’s a short video of an example project on improving topic coverage of Francophone literature:

Here’s a short video of an example project on improving topic coverage of artworks depicting Edinburgh:

In conclusion

Example student project researching artists, writers and banks related to Edinburgh. CC-BY-SA by Ewan McAndrew, University of Edinburgh

“There are no stars so lovely as Edinburgh street lamps.”

Robert Louis Stevenson’s words (above) are inscribed in Makars’ Court, Edinburgh. In taking this photo, sharing it openly on Wikimedia Commons and inserting it into the Makars’ Court page, the Edinburgh Award student has brought these words to my attention and helped me see that there are clearly other lovely stars in Edinburgh. Ten student stars in particular. And I have told them that they should all be enormously proud of their achievements this year.

Inscription of Robert Louis Stevenson quote in Makars’ Court, CC-BY-SA by Erisagal via Wikimedia Commons

One final student project!

Here’s a short video of an example project on improving topic coverage of plant pathology on Wikipedia:

Reflections on a Wikipedia assignment – Reproductive Medicine

Wikipedia as an important source of health information and not medical advice.

“The Internet, especially Wikipedia, had proven its importance in everyday life. Even the medical sector is influenced by Wikipedia’s omnipresence. It has gained considerable attention among both healthcare professionals and the lay public in providing medical information. Patients rely on the information they obtain from Wikipedia before deciding to seek professional help. As a result, physicians are confronted by a professional dilemma as patients weigh information provided by medical professionals against that on Wikipedia, the new provider of health information….

We state that Wikipedia should not be viewed as being inappropriate for its use in medical education. Given Wikipedia’s central role in medical education as reported in our survey, its integration could yield new opportunities in undergraduate education. High-quality medical education and sustainability necessitates the need to know how to search and retrieve unbiased, comprehensive, and reliable information. Students should therefore be advised in reflected information search and encouraged to contribute to the “perpetual beta” improving Wikipedia’s reliability. Therefore, we ask for inclusion in medical curricula, since guiding students’ use and evaluation of information resources is an important role of higher education. It is of utmost importance to establish information literacy, evidence-based practices, and life-long learning habits among future physicians early on, hereby contributing to medical education of the highest quality.
Accordingly, this is an appeal to see Wikipedia as what it is: an educational opportunity. This is an appeal to academic educators for supplementing Wikipedia entries with credible information from the scientific literature. They also should teach their protégés to obtain and critically evaluate information as well as to supplement or correct entries. Finally, this is an appeal to medical students to develop professional responsibility while working with this dynamic resource. Criticism should be maintained and caution exercised since every user relies on the accuracy, conscientiousness, and objectivity of the contributor.” (Herbert et al, BMC Medical Education, 2015)

Reproductive Medicine Wikipedia assignment at Edinburgh University – September 2016

Reproductive Medicine undergraduates – collaborating to create Wikipedia articles.

In September 2016, Reproductive Biology Honours students undertook a group research project: in groups of 4–5, each with a tutor, they researched a term from reproductive biomedicine that was not yet represented on Wikipedia. All 38 students were trained to edit Wikipedia and worked collaboratively both to undertake the research and to produce the finished written article. The assignment developed the students’ information literacy, digital literacy, collaborative working, academic writing & referencing, and ability to communicate to an audience. The end result was 8 new articles on reproductive medicine that enrich the global open knowledge community and will be added to and improved upon long after the students have left university, creating a rich legacy to look back on.

One of the new articles, high-grade serous carcinoma, was researched and written by 4th year student, Áine Kavanagh.

High-grade serous carcinoma - new Wikipedia article researched and written by Áine Kavanagh, in September 2016.

Rather than writing an assignment for an audience of one – the course tutor – never to be read again, Áine’s article can be viewed, built on and expanded by an audience of millions. Since its creation in September 2016, the article has been viewed 2,196 times.

Pageviews for the high-grade serous carcinoma article

Guest post:

Reflections on a Wikipedia assignment

by Áine Kavanagh.
Reproductive Medicine students – September 2016

The process of writing a Wikipedia article involved me trying to answer the questions I was asking myself about the topic. What was it? Why should I care about it? What does it mean to society? I also needed to make the answers to those questions clear to other people, who can’t see inside my head.

It then moved onto questions I thought other people might ask about the topic. Writing for Wikipedia is really an exercise in empathy and perspective. Who else is going to want to know about this and what might they be interested in about it?

Is what I’m writing accessible and understandable? Am I presenting it in a useful way? It’s an incredibly public piece of writing which is only useful if it serves the public, so trying to put yourself in the frame of someone who’s not you reading what you’ve written is important (and possibly the most difficult part).

It’s also about co-operation from the get-go. You can’t post a Wikipedia article and allow no one else to edit it. You are offering something up to the world. You can always come back to it, but you can never make it completely your own again. The beauty of Wikipedia is in groupthink, in the crowd intelligence it facilitates, but this means shared ownership, which can be hard to get your head around at first.

It’s a unique way of writing, and one tip for other students starting out on a Wikipedia project is not to be intimidated. Wikipedia articles can in theory be indefinitely long and dense, and will be around for an indefinitely long time, so writing a few hundred words can seem like adding a grain of sand to a desert. But if the information is not already there then you are contributing – and what is Wikipedia if not just a big bunch of contributions?

There’s also the fear that editors already on Wikipedia will swoop down and denounce your article as completely useless – but the beauty of storing information is that you can never really have too much of it. There’s no-one who can truly judge what is and isn’t worthy of knowing*.

*There’s no-one who can judge what’s worth knowing, but the sum of human knowledge needs to be organised, and so there are actually guidelines as to what a Wikipedia article is (objective account of a thing) and is not (platform for self-promotion).

Áine Kavanagh
