
AI (Generative Artificial Intelligence)

This guide helps students understand generative AI and how to use it responsibly to support academic research.

Reliability of AI-Generated Content

Generative AI tools can seem pretty impressive. They often give answers that sound polished and complete – but that doesn't always mean they're accurate. The quality of what you get depends on a few things: how clearly you word your prompt, what the tool is built to do, the kind of data it was trained on, and how it's been shaped by human input behind the scenes.

Because of all these moving parts, AI can sometimes produce what are called “hallucinations” – basically, statements that sound believable but are totally off-base or made up. These tools can also reflect bias or misinformation that exists in their training data. And here’s the tricky part: it’s not always clear what that training data includes, which makes it hard to know how trustworthy the output really is.

So, while AI can be a helpful starting point, especially for brainstorming or gathering ideas, it’s important to stay critical and double-check the facts – just like you would with any other source.

What To Watch For

Generative AI tools – like those you may use for writing assistance or research support – can sometimes produce information that sounds convincing but isn't accurate. This happens when the AI "hallucinates," or generates content based on patterns it has learned, even if those patterns don't reflect real information. One common hallucination you may encounter when using AI tools to find sources for your research is fake citations: references to articles, books, or studies that don’t actually exist.

These hallucinations are more likely to occur with older or less advanced AI models, especially free versions or tools that haven’t been trained extensively in a particular subject area. As a student, it’s important to double-check any citations or facts provided by AI tools using trusted academic databases or your library’s resources.
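
If you want to go a step beyond manual checking, the sketch below shows one way to spot-check the DOI from an AI-supplied citation against the public Crossref REST API (api.crossref.org), which only returns a record for DOIs that actually exist. This is a minimal illustration, not a library-supported tool, and the second DOI is deliberately fake.

    # A minimal sketch: spot-check an AI-supplied citation's DOI against
    # the public Crossref REST API (https://api.crossref.org).
    # Assumes the third-party "requests" package: pip install requests
    import requests

    def doi_exists(doi: str) -> bool:
        """Return True if Crossref has a record for this DOI."""
        resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
        return resp.status_code == 200

    print(doi_exists("10.1038/nature14539"))     # True  - a real article
    print(doi_exists("10.0000/not.a.real.doi"))  # False - nothing found

Keep in mind that a DOI that resolves only proves the record exists – you still need to confirm the source actually says what the AI claims it says.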

Before you begin using AI tools for research or writing, here’s something important to keep in mind: AI isn’t neutral. These tools are trained on massive amounts of data from the internet – and that data can include bias. Sometimes it’s because the data itself is skewed. Other times, it’s because certain perspectives are underrepresented, or because the people training the AI guide it in a particular direction.

The tricky part? We often assume that machines are objective and always get it right. That assumption is called the “machine heuristic” – basically, the idea that if a computer says it, it must be true. But that’s not always the case.

As AI continues to get better at sounding human, it’s also getting harder to tell the difference between content written by a person and content generated by a machine. That means misinformation – whether accidental or intentional – can spread more easily, especially on social media where there’s little fact-checking.

So, what can you do?

  • Be skeptical of AI-generated content, especially if it sounds too good to be true.
  • Double-check facts and sources using trusted tools – like the library’s databases.
  • Use the library’s AI-powered search tools (like the new natural language search in EBSCO) to help you find credible, relevant information based on your questions.

And if you want to sharpen your skills in spotting reliable sources, check out our tutorial on evaluating credibility.

Generative AI tools need tons of training data to work well. They're built through a process called deep learning, in which the model learns to spot patterns in the data that most of us wouldn't notice. It's impressive, but also a little mysterious.
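
To make "spotting patterns" a little more concrete, here's a toy sketch – nothing like a real large language model, just an illustration – of a tiny bigram model. It counts which word tends to follow which in some training text, then generates new text from those counts alone. Everything in it, including the training sentences, is made up for the example.

    # A toy illustration of pattern learning - not how real large
    # language models work internally, but the same basic idea:
    # the model learns statistical patterns, not facts.
    import random
    from collections import defaultdict

    training_text = ("the library has databases the library has tutorials "
                     "the databases have articles the tutorials have quizzes")

    # "Training": count which word follows which.
    follows = defaultdict(list)
    words = training_text.split()
    for current, nxt in zip(words, words[1:]):
        follows[current].append(nxt)

    # "Generation": pick each next word using only the learned patterns.
    word, output = "the", ["the"]
    for _ in range(6):
        options = follows.get(word)
        if not options:
            break  # no learned pattern continues from this word
        word = random.choice(options)
        output.append(word)

    # Output is fluent-looking, but the model has no notion of truth:
    # it can produce "the tutorials have articles", a plausible-sounding
    # statement the training text never actually said.
    print(" ".join(output))

Real systems are vastly bigger and more sophisticated, but the core point carries over: fluent output reflects patterns in training data, which is exactly why it can sound confident and still be wrong.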

Even though AI companies use human trainers to help guide how these tools behave, the way they actually work is still kind of a black box; it's not always clear what sources the AI is pulling from when it gives you an answer. Some tools are more transparent than others – for example, a chatbot might include links to websites that back up what it's saying.

Here’s the key: don’t take AI-generated info at face value. Just because it sounds confident doesn’t mean it’s correct. Always double-check facts and sources using trusted materials – like the library’s databases or academic search tools.