- Hallucinations: AI chatbots sometimes fabricate information or cite articles that do not exist. This is called "hallucinating."
- Paywalled content and scope of training data: ChatGPT and other chatbots are not trained on the vast majority of scholarly information available through the Libraries. Keep this in mind when doing research, where peer-reviewed articles and scholarly books provide the best evidence. It is also helpful to find out the scope and cutoff date of a chatbot's training data, so you know how current its information is.
- Reproducibility: AI chatbots generate content on the fly in response to your prompt, so the same prompt will not produce the same results twice. This is a particular problem when citing sources, because there is nothing stable to point to in your citations.
- Ethics, privacy, and other concerns: Numerous limitations related to ethics, privacy, bias, labor, and environmental impact are outlined on the Ethics and Privacy page of the Libraries' AI guide.