Copyright issues that arise from using artificial intelligence tools in academic works should be addressed, a professor specializing in cyberpsychology said at a human-AI interaction summit yesterday.
The remark came after Baptist University announced the launch of a new journal on computers in human behavior, which could serve as a reference for governments seeking to regulate the use of AI tools in the future.
Speakers from local universities said at the summit that they would consider changing their modes of teaching after the University of Science and Technology granted staff full authority last week to set their own standards for the use of the AI chatbot ChatGPT.
HKUST provost Guo Yike said students should engage with the chatbot rather than rely on ChatGPT to finish their coursework.
"It's not about how to make machines dumb, but it's about how to make humans more clever," Guo said.
However, Chinese University announced earlier that students would need permission to use AI tools for assignments.
HKU and HKBU said they would accept students using AI tools for coursework, but that presenting AI-generated content as original work would constitute plagiarism.
Matthieu Guitton, a professor at Laval University in Canada and editor-in-chief of the new journal, Computers in Human Behavior: Artificial Humans, said that with the rise of AI-driven tools, institutions should instead test students' capacity to manage knowledge.
"We are no longer testing students' capacity to write, but we are testing their capacity to think further," Guitton said.
When asked whether the deep learning underlying AI tools raises copyright concerns, he said he hoped the new journal could provide insights for policymakers.
The new journal will be the second sister publication of Computers in Human Behavior, a monthly journal on human-computer interaction and cyberpsychology published by Dutch publisher Elsevier.