The Surge of ChatGPT in Academia

With the proliferation of artificial intelligence, it’s no surprise that this technology is making its way into classrooms. The most recent example is ChatGPT, an AI-powered chatbot that can help students with their coursework, or, in my case, with my university politics column ;)

by Polykum Redaktion March 16, 2023

While AI-powered tools and the associated ethical discussions have been around for several decades, the discourse on their use in classrooms, and especially at ETH, has only arisen in recent months. While these tools bring many benefits, it is also important to be mindful of the potential dangers that accompany their use.

Data and privacy concerns

Firstly, there are major privacy concerns: one should be wary of sharing personal information, especially with AI-powered chatbots known for collecting and storing enormous amounts of data. These data privacy and security concerns apply not only to the chat itself, but also to the dataset the AI was trained on. ChatGPT, for example, was trained on 300 billion words mined from across the internet: not only scientific papers and blog posts, but also copyrighted and proprietary material and personal information acquired without consent, breaching contextual integrity without individuals being able to check how their data is used. When ChatGPT is used in academia, there is also a risk of plagiarism and fraud. This applies to both students and lecturers who can’t be bothered to produce their own coursework.

Built-in biases

Furthermore, AI-powered tools raise concerns about the quality of their answers. Like all AI systems, ChatGPT is only as good as the data it is trained on. If that data is biased or discriminatory (as is often the case on the internet), so are the answers. The same applies to accuracy: some questions might be too complex, with nuances a simple answer cannot capture, while other answers might simply be wrong. The data on which ChatGPT is trained is not fact-checked, and a lot of misinformation circulates on the internet. Because it is a machine learning system built on complex algorithms, a system like ChatGPT is very difficult to hold accountable for its errors and biases. If something goes wrong, it may be difficult to identify the source of the problem or take steps to correct it, accelerating the spread of misinformation.

Enormous possibilities

That being said, one can’t forget the possibilities AI-powered tools offer. They can enhance students’ learning experience by providing additional support in the form of personalised help and feedback, and save time for both students and lecturers by automating routine tasks. They also affect academia in ways one might not expect. By providing large amounts of information in a short time, they improve access to knowledge, bearing the potential to democratise education and make it available to individuals who might not have access to it otherwise. The hope is that this will lead to a more diverse and inclusive academic community, with a wider range of perspectives and experiences represented.

Instead of banning AI tools like ChatGPT out of fear, we should learn to embrace them and work with them. They should be used for exactly the tasks they excel at: research areas like natural language processing, and fields like healthcare and climate science, deal with immense volumes of data that call for pattern and trend recognition. Here, AI can help uncover new insights and potential solutions to complex problems. Using these tools where they make sense, while keeping their limitations in mind, is key to unlocking their potential.


Léa Le Bars, 23, VSETH-HoPo board member, thought ChatGPT would write her university column, but ended up having major data privacy concerns and ultimately wrote the entire thing herself.
