Generative AI: Home
This guide introduces basic concepts of generative AI (artificial intelligence), including large language models (LLMs) and AI chatbots such as ChatGPT. It offers guidance on effective and ineffective uses of generative AI tools, suggests resources for learning more about the topic, and links to other campus resources related to generative AI.
The field of artificial intelligence has been around since the 1950s. Its primary goal has always been to replicate and understand human intelligence, but it has evolved to encompass a broader range of objectives, including algorithms and models that can mimic human behavior (e.g., generating content) and perception (e.g., vision, natural language). Generative AI is a subset of AI that uses predictive algorithms to generate content. AI is not sentient, nor is it infallible.
Read more on the history of AI from the Heinz College of Information Systems and Public Policy at Carnegie Mellon University.
How We Discuss Generative AI
When describing new technologies, people rely on comparisons, metaphor, and figurative language. Authors have compared generative AI and related applications to calculators, parrots, and blurry JPEG files. Each of these comparisons captures some aspect of the technology, but all are imperfect.
Many terms used formally and informally to discuss generative AI in both popular and scholarly sources are words associated with human traits and behavior (including "learning," "teaching," "understanding," and even "intelligence"). As computer scientists Sayash Kapoor and Arvind Narayanan write in a post on pitfalls in AI journalism, "[r]ather than describing AI as a broad set of tools, such comparisons anthropomorphize AI tools and imply that they have the potential to act as agents in the real world."
The authors of this research guide acknowledge that uncritical use of language can help fuel undue hype around generative AI; however, due to the widespread use of this language, this guide quotes and links to sources that may use it. To read more about how language impacts our understanding of generative AI, we recommend the following pieces:
Bender, E. M. (2022, May 2). On NYT Magazine on AI: Resist the Urge to be Impressed. Medium. https://medium.com/@emilymenonbender/on-nyt-magazine-on-ai-resist-the-urge-to-be-impressed-3d92fd9a0edd
Haggart, B. (2023, January 31). Why it’s a mistake to compare calculators to ChatGPT. Blayne Haggart’s Orangespace. https://blaynehaggart.com/2023/01/31/why-its-a-mistake-to-compare-calculators-to-chatgpt/
Kapoor, S., & Narayanan, A. (2023, March 20). Eighteen pitfalls to beware of in AI journalism. https://www.aisnakeoil.com/p/eighteen-pitfalls-to-beware-of-in
Romero, A. (2023, March 3). On the Dangers of Overused AI Metaphors [Substack newsletter]. The Algorithmic Bridge. https://thealgorithmicbridge.substack.com/p/on-the-dangers-of-overused-ai-metaphors
Some important concepts for understanding the artificial intelligence landscape:
Algorithm: "a set of rules or instructions that tell a machine what to do with the data input into the system."
Deep Learning: "a method of machine learning that lets computers learn in a way that mimics a human brain, by analyzing lots of information and classifying that information into categories. Deep learning relies on a neural network."
Generative AI: a "system [that] takes in data and then uses predictive algorithms (a set of step-by-step instructions) to create original content. In the case of a large language model (LLM), that content can take the form of original poems, songs, screenplays, and the like produced by AI chatbots such as ChatGPT and Google Bard. The 'large' in LLMs indicates that the language model is trained on a massive quantity of data. Although the outcome makes it seem like the computer is engaged in creative expression, the system is actually just predicting a set of tokens and then selecting one."
Hallucination: "a situation where an AI system produces fabricated, nonsensical, or inaccurate information. The wrong information is presented with confidence, which can make it difficult for the human user to know whether the answer is reliable."
Large Language Model (LLM): "a computer program that has been trained on massive amounts of text data such as books, articles, website content, etc. An LLM is designed to understand and generate human-like text based on the patterns and information it has learned from its training. LLMs use natural language processing (NLP) techniques to learn to recognize patterns and identify relationships between words. Understanding those relationships helps LLMs generate responses that sound human—it’s the type of model that powers AI chatbots such as ChatGPT."
Machine Learning (ML): "a type of artificial intelligence that uses algorithms which allow machines to learn and adapt from evidence (often historical data), without being explicitly programmed to learn that particular thing."
Natural Language Processing (NLP): "the ability of machines to use algorithms to analyze large quantities of text, allowing the machines to simulate human conversation and to understand and work with human language."
Neural Network: "a deep learning technique that loosely mimics the structure of a human brain. Just as the brain has interconnected neurons, a neural network has tiny interconnected nodes that work together to process information. Neural networks improve with feedback and training."
Token: "the building block of text that a chatbot uses to process and generate a response. For example, the sentence 'How are you today?' might be separated into the following tokens: ['How', 'are', 'you', 'today', '?']. Tokenization helps the chatbot understand the structure and meaning of the input."
Monahan, J. (2023, July). Artificial intelligence, explained. Carnegie Mellon University's Heinz College. https://www.heinz.cmu.edu/media/2023/July/artificial-intelligence-explained
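The Token and Generative AI definitions above can be illustrated with a minimal sketch. Real chatbots use subword tokenizers and neural networks trained on massive corpora; this toy example (the tokenizer rule and the tiny corpus are purely illustrative) just splits text into word and punctuation tokens, counts which token follows which, and then "generates" by selecting the most likely next token:

```python
# Toy illustration of tokenization and next-token prediction.
# Real LLMs use subword tokenizers (e.g., byte-pair encoding) and deep
# neural networks; this sketch uses word splitting and bigram counts.
import re
from collections import Counter, defaultdict

def tokenize(text):
    """Split text into word and punctuation tokens."""
    return re.findall(r"\w+|[^\w\s]", text)

# The glossary's example sentence:
print(tokenize("How are you today?"))  # ['How', 'are', 'you', 'today', '?']

# "Training": count which token follows each token in a tiny toy corpus.
corpus = "the cat sat on the mat . the cat ate ."
tokens = tokenize(corpus)
bigrams = defaultdict(Counter)
for current, nxt in zip(tokens, tokens[1:]):
    bigrams[current][nxt] += 1

# "Generation": predict a set of candidate tokens, then select one --
# here, the most frequent follower of "the".
candidates = bigrams["the"]
print(candidates.most_common())         # [('cat', 2), ('mat', 1)]
print(candidates.most_common(1)[0][0])  # cat
```

However simplified, this mirrors the glossary's point: the system is not engaged in creative expression; it is predicting a set of tokens and selecting one.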
Is "Ask a Librarian" a Chatbot?
AI Literacy - Key Points
"AI literacy," as defined by Long and Magerko, includes the ability to "critically evaluate AI technologies" and to "use AI as a tool online, at home, and in the workplace."
When using generative AI, consider the following key points:
- You are responsible and accountable for any AI-generated content that you incorporate into your work, projects, etc.
- Generative AI can produce inaccurate, biased, or out-of-date content due to limitations in its data sources.
- UW-Madison restricts entering institutional data into any generative AI tool or service, such as ChatGPT or Google Bard. Consider your own data privacy when using these tools and services.
- Prompt engineering is a skill set worth developing.