Many university community members are understandably concerned about the implications of generative AI tools for academic integrity. Appropriately using and citing the work of others, building on that work with one's own critiques and ideas, and clearly and accurately representing one's own work and contributions are some of the aspects of academic integrity that use of generative AI could affect. Some may wonder, for example, whether students using such tools to complete assignments would count as academic misconduct. To address this question and others, the Provost's Offices at both UBCV and UBCO have created a helpful list of questions and answers about ChatGPT and academic integrity on the UBC Academic Integrity website.
One area of interest for many is whether tools exist that can reliably detect the probability of a text having been written by an AI. This is a rapidly changing space, with many new and existing organizations working on such detectors. Both their efficacy and ways to evade them are evolving quickly, and as the Q&A on the Academic Integrity website notes, they are by no means foolproof and should not be used as the sole basis for a decision on whether academic misconduct has occurred. Consider also the privacy issues involved in entering student work into a third-party platform that has not undergone a privacy impact assessment, without students' consent; be sure not to enter any personal or identifying information.
See also an editorial by two faculty members on “The Opportunities of ChatGPT” from the inaugural Academic Integrity Digest newsletter, with helpful reflections on both the challenges generative AI writing tools present and the potential “to make responsible, ethical use” of them in teaching, in collaboration with students.
Besides students using generative AI tools in an unauthorized manner to complete assignments, another consideration concerns the importance of accurately attributing and citing the words and ideas of others in one's own work, as part of scholarly integrity. Some text-generation tools, such as ChatGPT, don't cite the sources of the information in the text they generate, or if they do, the citations may be fictional. Students using such tools, say for idea generation, would not have access to the original sources in order to properly cite them in their own work. They may therefore inadvertently include ideas from other sources that should be cited, and could have been cited if the students had done the research themselves. It would be useful to talk with students about this situation, noting that they should always verify information from generative AI tools, as there can be errors, and that verifying also allows them to provide their own sources. (Note: some generative AI tools connected to the internet, such as Bing Chat and Perplexity, do link to sources for the information they provide.)
In this resource, we provide suggestions for communicating with students about generative AI tools, as well as for designing assessments in ways that support both academic integrity and student learning.