University of Texas Libraries

Athletic Tutors Guide

Evaluating AI Tools and Output


When using artificial intelligence, it is important to evaluate the tool itself and the tool’s output critically. Ask yourself these questions:

  • What is the purpose of the tool?
  • How is this tool funded? Does the funding impact the credibility of the output?
  • What, if any, ethical concerns do you have about this tool? 
  • Does the tool ask you to upload existing content such as an image or paper? If so, are there copyright concerns? Is there a way to opt out of including your uploaded content in the training corpus? 
  • What is the privacy policy? If you are assigning this tool in a class, be sure to consider any FERPA concerns. Faculty may also reach out to the Office of Academic Technology for guidance. 
  • What corpus or data was the tool trained on, and what data does it access? Consider how comprehensive the data set is (for example, does it include paywalled information like that in library databases and electronic journals?), whether it is current enough for your needs, any bias in the data set, and algorithmic bias.
  • If reproducibility is important to your research, does the tool support it?
  • Is the information the tool creates or presents credible? Because generative AI generates content as well as or instead of returning search results, it is important to read across sources to determine credibility.
  • If any evidence is cited, are the citations real or "hallucinations" (made-up citations; see the glossary)?

Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.