Most of the AI you are hearing about - like ChatGPT - is generative AI. This simply means that the AI tool is creating something "new" from already existing information that has been uploaded (or fed) into the AI tool. This can be numerical data, images, Wikipedia entries, journal articles, personal information - the options are endless. What can generative AI be useful for?
See caveats and red flags for other important information.
Let's get the big one out of the way: It's your responsibility to know and abide by University policies (including the Academic Integrity Policy and Student Code of Conduct) and your professor's policies regarding AI.
Here's something helpful from UMD Libraries (see link below for more):
As of 2023, a typical AI model isn't assessing whether the information it provides is correct. Its goal when it receives a prompt is to generate what it thinks is the most likely string of words to answer that prompt. Sometimes this results in a correct answer, but sometimes it doesn’t – and the AI cannot interpret or distinguish between the two. It’s up to you to make the distinction.
AI can be wrong in multiple ways:
It can give the wrong answer
It can omit information by mistake
It can make up completely fake people, events, and articles
It can mix truth and fiction
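The "most likely string of words" idea quoted above can be sketched with a toy example. This is a hypothetical, hand-built probability table, not a real language model; it just illustrates that picking the most probable continuation says nothing about whether the continuation is true.

```python
# Toy sketch (assumed, simplified): a language model assigns probabilities
# to possible next words and emits the most likely one. It never checks
# whether the resulting sentence is factually correct.

# Hypothetical probability table mapping a two-word context to next-word odds.
next_word_probs = {
    ("the", "capital"): {"of": 0.9, "city": 0.1},
    ("capital", "of"): {"France": 0.5, "Australia": 0.3, "Atlantis": 0.2},
}

def most_likely_next(context):
    """Pick the highest-probability next word for a given two-word context."""
    probs = next_word_probs[context]
    return max(probs, key=probs.get)

# The "model" happily continues a prompt with plausible-looking words,
# whether or not the continuation is correct:
print(most_likely_next(("the", "capital")))   # "of"
print(most_likely_next(("capital", "of")))    # "France" -- likely, not verified
```

Note that "Atlantis" sits in the table alongside real places: a fluent-sounding answer and a fabricated one come from the same mechanism, which is why you have to make the distinction yourself.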
Additionally, remember that AI operates on the information it's been given. If that information is incorrect, false, inaccurate, biased, misleading, etc., the generative output from the AI can be all of these things as well. Keep in mind GIGO (Garbage In, Garbage Out).
How did some of these play out in what we did today?
More info in the links below!