Generative AI
Introduction
This guide introduces basic concepts of generative artificial intelligence (AI), including large language models (LLMs) and AI chatbots such as ChatGPT. It provides:
- guidance on effective and ineffective uses of generative AI tools;
- resources to learn more about this topic; and
- links to other campus resources relating to generative AI.
The field of artificial intelligence has existed since the 1950s. Its original goal was to understand and replicate human intelligence, but it has since expanded to encompass a broader range of objectives, including algorithms and models that mimic human behavior (e.g., generating content) and perception (e.g., vision, natural language). Generative AI is a subset of AI that uses predictive algorithms to generate content. AI is neither sentient nor infallible.
How We Discuss Generative AI
When describing new technologies, people rely on comparison, metaphor, and figurative language. Authors have compared generative AI and related applications to calculators, parrots, and blurry JPEG files. Each of these comparisons captures some aspect of the technology, but all are imperfect.
Many terms used to discuss generative AI in both popular and scholarly sources are words associated with human traits and behavior (including "learning," "teaching," "understanding," and even "intelligence"). As computer scientists Sayash Kapoor and Arvind Narayanan write in a post on pitfalls in AI journalism, "[r]ather than describing AI as a broad set of tools, such comparisons anthropomorphize AI tools and imply that they have the potential to act as agents in the real world."
The authors of this guide acknowledge that uncritical use of this language can fuel undue hype around generative AI; however, because such language is widespread, this guide quotes and links to sources that may use it. To read more about how language shapes our understanding of generative AI, we recommend the following pieces:
Bender, E. M. (2022, May 2). On NYT Magazine on AI: Resist the Urge to be Impressed. Medium. https://medium.com/@emilymenonbender/on-nyt-magazine-on-ai-resist-the-urge-to-be-impressed-3d92fd9a0edd
Haggart, B. (2023, January 31). Why it’s a mistake to compare calculators to ChatGPT. Blayne Haggart’s Orangespace. https://blaynehaggart.com/2023/01/31/why-its-a-mistake-to-compare-calculators-to-chatgpt/
Kapoor, S., & Narayanan, A. (2023, March 20). Eighteen pitfalls to beware of in AI journalism. AI Snake Oil. https://www.aisnakeoil.com/p/eighteen-pitfalls-to-beware-of-in
Romero, A. (2023, March 3). On the Dangers of Overused AI Metaphors [Substack newsletter]. The Algorithmic Bridge. https://thealgorithmicbridge.substack.com/p/on-the-dangers-of-overused-ai-metaphors
The Libraries & Generative AI
Have questions about generative AI or this guide? Contact the following members of the Libraries Generative AI Interest Group:
- Dave Bloom (david.bloom@wisc.edu)
- Heather Shimon (heather.shimon@wisc.edu)
Is "Ask a Librarian" a Chatbot?
No!
You are talking to a real person when using the Ask a Librarian service.