The Surge of ChatGPT in Academia

With the proliferation of artificial intelligence, it’s no surprise that this technology is making its way into classrooms. The most recent arrival is ChatGPT, an AI-powered chatbot that can help students with their coursework, or in my case, with their university politics column ;)

by Polykum Redaktion

While AI-powered tools and the ethical discussions surrounding them have been around for several decades, the discourse on their use in classrooms, and especially at ETH, has only arisen in recent months. While they bring many benefits, it is also important to be mindful of the potential dangers that accompany their use.

Data and privacy concerns

Firstly, there are major privacy concerns: One should be wary when it comes to sharing personal information, especially with AI-powered chatbots known for collecting and storing enormous amounts of data. These data privacy and security concerns apply not only to the chat itself, but also to the dataset the AI was trained on. ChatGPT, for example, was trained on 300 billion words mined from across the internet: not only data from scientific papers and blog posts, but also copyrighted and proprietary material and personal information acquired without consent, breaching contextual integrity without individuals being able to control how their data is used. When ChatGPT is used in academia, there is also a risk of plagiarism and fraud. This applies both to students and to lecturers who can’t be bothered to produce their own coursework.

Built-in biases

Furthermore, AI-powered tools raise concerns about the quality of their answers. Like all AI systems, ChatGPT is only as good as the data it is trained on. If the data are biased or discriminatory (as is the case on the internet), the answers will be too. The same applies to accuracy. Some questions might be too complex, their nuances not captured by a simple answer, while other answers might just be wrong. The data on which ChatGPT is trained is not fact-checked, and there is a lot of misinformation circulating on the internet. Because it is a machine learning system built on complex algorithms, it is very difficult to hold a system like ChatGPT accountable for its errors and biases. If something goes wrong with the system, it may be difficult to identify the source of the problem or take steps to correct it, accelerating the spread of misinformation.

Enormous possibilities

That being said, one can’t forget the possibilities AI-powered tools offer. They can enhance students’ learning experience by providing additional support in the form of personalised help and feedback, and save time for both students and lecturers by automating routine tasks. They also affect academia in ways one might not expect. By providing large amounts of information in a short time, they improve access to knowledge, bearing the potential to democratise education and make it available to individuals who may not have access to it otherwise. The hope is that this will lead to a more diverse and inclusive academic community, with a wider range of perspectives and experiences represented.

Instead of banning AI tools like ChatGPT out of fear, we should learn to embrace them and work with them. They should be used for exactly the tasks they excel at: research areas like natural language processing, and fields like healthcare and climate science, grapple with immense volumes of data and the need to recognise patterns and trends. Here, AI can help uncover new insights and potential solutions to complex problems. Using these tools where they make sense, while keeping their limitations in mind, is key to unlocking their potential.


Léa Le Bars, 23, VSETH-HoPo board member, thought ChatGPT would write her university column, but ended up having major data privacy concerns and ultimately wrote the entire thing herself.
