Artificial Intelligence (AI) involves creating machines capable of performing tasks that have traditionally required human intelligence, such as reasoning, problem-solving, and decision-making. Today's AI systems are trained on data gathered from the internet, learning patterns from it that inform their outputs.
For an in-depth overview of AI and additional tools, see the Introduction to Artificial Intelligence (AI) research guide created by UT Austin Library staff.
Before we can advocate for incorporating AI tools into the classroom, we must acknowledge their costs. Generative AI is poised to have a major environmental impact, and the training of AI models often comes at the expense of the intellectual property rights of researchers and artists. We will go over these drawbacks and some of the attempts to mitigate them before providing a rubric you can use to assess the ethical and practical usability of AI tools for yourself.
The energy demand of AI systems is massive, as is their need for cooling. Data centers built for AI are directly responsible for delaying the closure of several coal plants (Halper & O’Donovan, 2024). To train GPT-3, Microsoft used 185,000 gallons of fresh water, enough to fill a nuclear cooling tower, or, put another way, about a bottle of water for every 20 questions (Gendron, 2024).
While these massive energy demands have created a strong incentive for clean-energy startups, promises of commercial fusion in the near future are dubious. As Harvard physicist John Holdren told the Washington Post, “Predictions of commercial fusion by 2030 or 2035 are hype at this point, we haven’t even yet seen a true energy break-even where the fusion reaction is generating more energy than had to be supplied to facilitate it.”
The reality is that, for the time being, any use of generative AI carries an environmental cost, and by using it we contribute to its demand. As an instructor, your impact is magnified: putting AI on your syllabus may create hundreds of new users among your students.
Privacy and personal rights are at the center of many debates on AI. Models are trained by scraping images and text from social media and other corners of the web, often taking works from those without the means to mount a legal defense. By interacting with an AI, you are also providing it with training data. Any text you type, and any document or image you upload, can and will be used to train the model, except where a university agreement or privacy setting specifies otherwise.
For this reason, UT policy prohibits uploading unpublished and nonpublic materials to ChatGPT. As users, our best defense against the appropriation of our creations is to refrain from uploading any documents that are unavailable elsewhere on the web, doing so only when an agreement with UT protects our work. Similar ethical care is required when working with documents from sensitive collections with limited access.
Caution in use and understanding an AI tool's terms remain our best defense, but other privacy tools are emerging. The Glaze Project is a University of Chicago effort to create software that prevents an AI from training on artwork. It encompasses several applications with a shared purpose: altering an image file so that it appears unchanged to humans but cannot be interpreted by machines. Its software variants can prevent a model from copying an artist's style, or even feed misinformation into the model to 'poison' it. Glaze and projects like it offer hope that individuals will be able to protect their creations in the future.
A major point of contention with AI in the classroom is academic integrity: how much can a student rely on generative AI before the work submitted is no longer their own? Where does use of the tool become plagiarism? More often than not, this determination still falls to the instructor.
Here at UT, our AI rules remain guidelines (outside of those that overlap with security concerns and existing academic policies). Our guidelines, such as those produced by the Office of Academic Technology, take a cautiously liberal approach: instructors may limit or require AI use as long as they state a clear AI policy in their syllabus and have cleared any software, even free software, with the university co-op.
Our university and others are operating under a policy of informed use: give students the resources to decide for themselves whether to use AI in light of ethical and privacy concerns, with the understanding that their work is ultimately their responsibility. In that spirit, we offer a rubric for assessing an AI tool both ethically and practically.
This rubric, adapted from one provided by the University of Rochester Medical Center library, is designed to help you assess the utility and ethics of an AI tool. It is useful not only for evaluating the tool itself but also for reinforcing good practices, such as reading a tool's data-usage agreement.
Contact:
scholarslab@austin.utexas.edu
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.