While generative AI tools can help users with tasks such as brainstorming new ideas, organizing existing information, mapping out scholarly discussions, or summarizing sources, they are also notorious for not relying fully on factual information or rigorous research strategies. In fact, they are known for producing "hallucinations," a term used in AI research to describe false information that the system generates, sometimes in support of its own statements. These "hallucinations" are often presented in a very confident manner and may consist of partially or fully fabricated citations or facts.
Certain AI tools have even been used to intentionally produce false images or audiovisual recordings in order to spread misinformation and mislead audiences. Referred to as "deepfakes," these materials can be used to subvert democratic processes and are therefore particularly dangerous.
Additionally, the information presented by generative AI tools may lack currency, as some systems do not have access to the latest information. Rather, they may have been trained on older datasets and therefore generate dated representations of current events and the surrounding information landscape.
Another potentially significant limitation of AI is the bias that can be embedded in the products it generates. Trained on immense amounts of data and text available on the internet, these large language models are built simply to predict the most likely sequence of words in response to a given prompt, and they therefore reflect and perpetuate the biases inherent in their training data. An additional source of bias lies in the fact that some generative AI tools use reinforcement learning from human feedback (RLHF), and the human testers who provide that feedback are themselves not neutral. Accordingly, generative AI tools such as ChatGPT have been documented producing socio-politically biased output, occasionally even containing sexist, racist, or otherwise offensive content.
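To make the "predict the most likely sequence of words" principle above concrete, here is a minimal, purely hypothetical Python sketch: a toy trigram model trained on four invented sentences. It is not a depiction of how any production system is built, but because such a model learns only word co-occurrence statistics, it faithfully reproduces the gendered association baked into its tiny training set.

    # A toy trigram language model (purely illustrative; invented data).
    # It learns nothing but word co-occurrence statistics, so it
    # reproduces whatever associations its training text contains.
    import random
    from collections import defaultdict

    # Hypothetical training corpus with a built-in gendered association.
    corpus = (
        "the nurse said she was tired . "
        "the engineer said he was tired . "
        "the nurse said she was busy . "
        "the engineer said he was busy ."
    ).split()

    # Count how often each word follows each two-word context.
    counts = defaultdict(lambda: defaultdict(int))
    for w1, w2, w3 in zip(corpus, corpus[1:], corpus[2:]):
        counts[(w1, w2)][w3] += 1

    def next_word(w1, w2):
        """Sample the next word in proportion to how often it followed
        the context (w1, w2) in training: prediction, not fact-checking."""
        options = counts[(w1, w2)]
        return random.choices(list(options), weights=list(options.values()))[0]

    # Generate continuations: the model has "learned" that nurses are
    # 'she' and engineers are 'he', because that is all its data says.
    for start in ("nurse", "engineer"):
        sentence = ["the", start]
        while sentence[-1] != "." and len(sentence) < 10:
            sentence.append(next_word(sentence[-2], sentence[-1]))
        print(" ".join(sentence))

Real large language models operate on this same predict-the-next-token principle at vastly greater scale, which is why biases present in internet-scale training data surface in their output.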
Related Recommendations
Meticulously fact-check all information produced by generative AI, including verifying the sources of any citations the AI uses to support its claims.
Critically evaluate all AI output for any possible biases that can skew the presented information.
Avoid asking AI tools to produce a list of sources on a specific topic, as such prompts may lead the tools to fabricate citations.
When available, consult the AI developers' documentation to determine whether the tool's information is up to date.
Always remember that generative AI tools are not search engines; they simply use large amounts of data to generate responses constructed to "make sense" statistically, not to be factually accurate.
Selected Readings
Alkaissi, H., & McFarlane, S. I. (2023). Artificial hallucinations in ChatGPT: Implications in scientific writing. Cureus, 15(2), e35179. https://doi.org/10.7759/cureus.35179
Ara, A., & Ara, A. (2024). Exploring the ethical implications of generative AI. IGI Global. https://doi.org/10.4018/979-8-3693-1565-1
Bowman, E. (2022, December 19). A new AI chatbot might do your homework for you. But it's still not an A+ student. NPR.
De Vynck, G. (2023, August 16). ChatGPT leans liberal, research shows. The Washington Post.
Drahl, C. (2023, October 6). AI was asked to create images of Black African docs treating white kids. How'd it go? NPR.
Metz, C. (2021, March 15). Who is making sure the A.I. machines aren't racist? The New York Times [Digital Edition].
Walters, W. H., & Wilder, E. I. (2023). Fabrication and errors in the bibliographic citations generated by ChatGPT. Scientific Reports, 13(1), 14045. https://doi.org/10.1038/s41598-023-41032-5
There are currently also multiple privacy concerns associated with the use of generative AI tools. The most prominent issues revolve around breaches of personal or sensitive data and the risk of re-identification. More specifically, most AI-powered language models, including ChatGPT, are trained on large amounts of user-submitted data in order to generate new information products effectively. This means that personal or sensitive data entered by users can become an integral part of the material used to further train the AI, without the users' explicit consent. Moreover, certain generative AI policies even permit developers to profit from this personal or sensitive information by selling it to third parties. Even when the user enters no clearly identifying personal information, using the system still carries a risk of re-identification, as the submitted data may contain patterns that allow the generated information to be linked back to a specific individual or entity.
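The re-identification risk described above can be made concrete with a short, hypothetical Python sketch. The records and fields below are invented for illustration: even after names are removed, a combination of a few ordinary traits ("quasi-identifiers") can match exactly one record, singling that person out.

    # Hypothetical illustration of re-identification risk: a dataset
    # with names removed can still single people out once a few
    # quasi-identifiers (ZIP code, birth year, job title) are combined.
    from collections import Counter

    # Invented, already "de-identified" records.
    records = [
        {"zip": "14850", "birth_year": 1986, "job": "librarian"},
        {"zip": "14850", "birth_year": 1986, "job": "librarian"},
        {"zip": "14850", "birth_year": 1986, "job": "teacher"},
        {"zip": "13210", "birth_year": 1991, "job": "engineer"},
    ]

    # Count how many records share each quasi-identifier combination.
    combos = Counter((r["zip"], r["birth_year"], r["job"]) for r in records)

    for combo, n in combos.items():
        status = "re-identifiable" if n == 1 else f"shared by {n} records"
        print(combo, "->", status)
    # The 'teacher' and 'engineer' combinations each match exactly one
    # record, so anyone who knows those traits knows who that person is.

The same logic applies to text submitted to a generative AI tool: distinctive patterns in a prompt can function as quasi-identifiers even when no name is included.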
Related Recommendations
Avoid sharing any personal or sensitive information with AI-powered tools.
Do not upload Library materials (e.g., articles, ebooks, infographics, psychographics, or other datasets) into AI tools, as doing so is prohibited.
Always review a generative AI tool's privacy policy before using it. Be cautious about policies that permit inputted data to be freely distributed to third-party vendors and/or other users.
Selected Readings
Gupta, M., Akiri, C., Aryal, K., Parker, E., & Praharaj, L. (2023). From ChatGPT to ThreatGPT: Impact of generative AI in cybersecurity and privacy. IEEE Access, 11, 80218–80245. https://doi.org/10.1109/ACCESS.2023.3300381
Hunter, T. (2023, April 28). Why you shouldn’t tell ChatGPT your secrets. The Washington Post.
Schneier, B., & Sanders, N. (2023, July 20). Can you trust AI? Here’s why you shouldn’t. The Conversation U.S.
Shafiq, M. (2023, September 25). Understanding the ethics of data generative AI: Privacy concerns and solutions. LinkedIn.

