
Generative AI: Using Generative AI in Your Coursework

Guidance and resources for AI chatbots and other types of Generative AI

Considerations for Using Generative AI in Your Coursework

The appropriate use of generative AI in coursework varies by class and will evolve over time. If you plan to use generative AI for a course assignment, consult your instructor first.

When using generative AI, be aware that:

  • you are the responsible author of the project, paper, or essay, and you are accountable for the accuracy of the language produced and the sources cited by generative AI; if you use citations provided by generative AI, check that the sources exist and are relevant to your work
  • AI is not a person and therefore cannot take responsibility for what it generates
  • transparency about your use of generative AI is important; for guidance, refer to Citing Generative AI
  • the text produced by generative AI will not be the same for every user and is not a consistent source of information
  • generative AI can produce inaccurate, biased, and out-of-date content (see "Do AI Chatbots Provide Credible Information?" on this page)
  • any use of generative AI other than where indicated by your instructor is a violation of coursework expectations and will be addressed through UW–Madison’s academic misconduct policy, specifically UWS 14.03(1)(b): "Uses unauthorized materials or fabricated data in any academic exercise"

If you're an instructor seeking guidance on integrating generative AI into your courses or on student use of AI in coursework, please consult the following resource.

Check Your Sources!

Being able to identify the original sources of information is important not only so that you can credit the authors in your own writing, but also so that you can evaluate those sources for accuracy and credibility.

ChatGPT, Google Bard, and Microsoft Bing Chat do not all respond the same way when prompted to provide information sources. When they do include sources, the citations can look real but not be real (for example, they may refer to real experts on a topic and real journal titles, but to non-existent articles).

AI chatbots cannot evaluate for accuracy or credibility. It's critical that you read the original versions of the sources they provide to verify that the sources are credible and that the chatbot has not misrepresented their content.

Regardless of how you search for information, be sure to locate, read, and evaluate the original source.

Do AI Chatbots Provide Credible Information?

ChatGPT and other AI chatbots are powerful tools, but they have been found to be unreliable for finding credible information.

AI chatbots:

  • are predictive text generators, not search engines; they are designed to respond to questions in a human-like manner, not to find reliable information sources
  • do not evaluate for accuracy, bias, or credibility
  • can generate factually incorrect statements
  • make up (“hallucinate”) sources
  • do not respond to prompts consistently
  • are trained on undisclosed data that:
    • retains systemic biases found online
    • generally does not include paywalled scholarly articles
    • may not be current

Incorporating AI Chatbots into Your Research Strategies

Although using generative AI directly to find sources on your topic is not advised, AI chatbots may be helpful for some parts of your research process. For instance:

  • Prompt a chatbot to help brainstorm ideas or to narrow or broaden your topic. (Just be sure to confirm accuracy as you research further.)
  • Prompt a chatbot to generate a list of keywords associated with your topic. Use these terms as you search online, in the Library catalog, or in our Library databases.
  • Use a chatbot to translate or summarize sources for improved readability.