AI in higher education: 'Banning it won't work'

Marine Ernault / IJL – Réseau.Presse – La Voix acadienne

Philosopher Frédéric Bruno, a professor in the Faculty of Information at the University of Quebec in Montreal and a researcher at the International Observatory on the Societal Impacts of Artificial Intelligence and Digital Technology, discusses the advantages and drawbacks of generative artificial intelligence (AI) in higher education.

When Wikipedia arrived, there was much debate in the academic world, with teachers worried that students would simply copy from the online encyclopedia. Do you see any similarity to current concerns?

This debate comes up again and again. But compared with Wikipedia, we are on another level: these are tools that can generate original text from a simple prompt. In my opinion, generative AI systems like ChatGPT are more disruptive to student development and learning.

Specifically, should we ban the use of ChatGPT at post-secondary institutions or consider it an educational tool?

Simply banning it won't work, because it is extremely difficult, if not impossible, to detect content produced by generative AI.

Detection tools have been developed, but they are not an exact science. At best, they give a probability that a piece of content was written by generative AI. There is no definitive answer, and we cannot prove with certainty that it is plagiarism.

This does not mean that post-secondary institutions should not have clear policies. On the contrary, some form of oversight is necessary. They must walk the fine line between the opportunities generative AI offers and the risks it poses.

Researcher Frédéric Bruno explains that, faced with artificial intelligence, the way students are assessed in higher education must be rethought. (Photo: Courtesy)

Artificial intelligence makes it possible to do many educationally interesting things. It can help students prepare for a course or a test. We can also imagine using it as teaching material, exploring in the classroom both the power and the limitations of the tool.

Policies will not solve all problems either. Colleges and universities must also take a preventive approach.

The entire academic community, students, staff, and professors alike, must be informed about these new tools and properly trained to use them. They should know how the tools work and what ethical issues they raise. The idea is not to turn everyone into experts, but to improve understanding and knowledge and to build a shared culture of academic integrity.

To what extent is student work produced with the help of AI still their own creation?

That question is still very difficult to answer. If a student asks ChatGPT a relatively simple question and copies and pastes the answer it provides, that is clearly inappropriate use and could be classified as plagiarism.

But there are many other, more subtle uses, and the gray area is very large. If a student drafts a text with AI and then edits it, adding or removing references, at what point does it become their own? Higher education must address this problem.

In the meantime, should the growing presence of artificial intelligence lead to a rethink of the content and format of exams?

The arrival of artificial intelligence fundamentally challenges the way things have been done until now. For centuries, we have relied heavily on the production of written texts to assess skills and learning. Going forward, without eliminating them entirely, we will have to rethink these kinds of assessments; they can no longer occupy as much space as before.
