Eastern Illinois University

Academic Support Center

Artificial Intelligence Guidance

Dean of Students, Faculty Development & Innovation Center, & Student Success Center

Spring 2025


FOR STUDENTS:

Use Generative AI Responsibly

Writing and research are central skills for learning and understanding material in college. The central rule of academic integrity is doing your own work.

With regard to AI, the most important rule to follow is this: if you are considering using AI for any of your classes, always talk to your professor in advance to obtain permission to use AI tools in your coursework. When permission is granted, always cite the use of AI appropriately in your work. Talk to your professor about their expectations for AI use and how to cite it.

Be aware that it is your responsibility to stay informed about whether a product or tool uses generative AI. For example, many students may not think of Grammarly as an AI tool; however, Grammarly is a writing assistant that uses AI to provide real-time writing suggestions and should not be used without prior approval from the course instructor.

Remember to:

Think Critically About Generative AI

The output of generative AI models is not always accurate; it may be misleading, biased, or entirely fabricated. Use critical thinking skills and question the source and quality of the training data behind a generative AI model. Carefully evaluate the results (and citations), fact-check, determine the accuracy and usefulness of the output, and engage in class discussions to identify and address biases or inaccuracies, which will help develop your thinking about proper sources.

Ways Not to Use Generative AI

There are some things to keep in mind when using generative AI for your coursework.


FOR FACULTY: 

Indicators of Generative AI Content in Student Work

As part of a holistic evaluation of submitted student work, the following indicators of AI-generated content may be helpful when an instructor suspects generative AI use. The indicators below are followed by practical ideas for minimizing generative AI use and a list of resources consulted for this guidance; a downloadable PDF of this guidance is also available.

Pattern of formulaic or repetitive language structures.

Generative AI tends to reuse similar sentence constructions and transitions throughout a piece, particularly when connecting ideas or starting paragraphs. This creates a mechanical rhythm that differs from natural writing variation.
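As a rough illustration of this indicator, one could count how often sentences begin with the same opening words. This is only a heuristic sketch (the sample text and threshold are invented for illustration), not a detection method endorsed by this guidance, and a high count is never proof of AI use on its own.

```python
# Heuristic sketch: count repeated two-word sentence openers as a
# rough signal of formulaic structure. Illustrative only.
import re
from collections import Counter

def opener_counts(text, n=2):
    """Count the first n words of each sentence in text."""
    sentences = re.split(r"[.!?]+\s+", text.strip())
    openers = []
    for s in sentences:
        words = s.lower().split()
        if len(words) >= n:
            openers.append(" ".join(words[:n]))
    return Counter(openers)

# Hypothetical sample text for demonstration.
sample = ("Moreover, the data shows growth. Moreover, the trend continues. "
          "In addition, results vary. Moreover, the outlook is strong.")
counts = opener_counts(sample)
print(counts.most_common(1))  # the most frequently repeated opener
```

Human writers also repeat openers; any such count only suggests where to look more closely, alongside the other indicators listed here.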

Inconsistent knowledge depth across sections.

Students using AI may demonstrate sophisticated understanding in some paragraphs while showing basic comprehension in others, especially when they have modified only portions of AI-generated text. This creates noticeable fluctuations in expertise level.

Context-inappropriate vocabulary or jargon.

AI systems sometimes employ specialized terminology or advanced vocabulary that seems out of place given the student's demonstrated language proficiency in other work or class discussions. This mismatch in language sophistication can be particularly noticeable.

Lack of personal voice and/or experience integration.

When students use AI, their writing often lacks authentic personal examples, unique perspectives, or reference to class discussions. The content may be technically sound but feel detached from the student's known experiences and viewpoints.

Time-frozen or outdated references.

AI systems may reference events or sources only up to their training cutoff date, creating temporal inconsistencies. Students might submit work containing obviously outdated information or failing to incorporate recent developments discussed in class.

Perfect citation formats with inaccessible sources.

AI tends to generate plausible-looking citations that follow style guides precisely but may reference non-existent or inaccessible sources. When checked, these citations often lead nowhere or contain subtle errors in dates or publication details.

Misaligned prompt responses.

AI-generated content sometimes includes phrases that seem to respond to an input prompt rather than the actual assignment question. Look for sentences that appear to answer questions that were not asked or address requirements not included in the assignment.

Generic or vague examples.

When providing examples or evidence, AI often defaults to general, non-specific illustrations rather than detailed, concrete instances. This creates a pattern of shallow support for arguments that lacks the specificity typically found in researched work.

Inconsistent regional language patterns.

Generative AI may mix American, British, and other English variants within the same document, creating inconsistencies in spelling, terminology, and idioms that would be unusual in authentic student writing.
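This particular indicator can be checked mechanically. The sketch below flags American and British spelling variants appearing in the same text; the word-pair list is a small hypothetical sample, not an exhaustive dictionary, and mixed variants alone do not establish AI use.

```python
# Sketch: flag American/British spelling variants mixed in one text.
# The variant pairs below are a small illustrative sample only.
import re

US_UK_PAIRS = [
    ("color", "colour"),
    ("analyze", "analyse"),
    ("organization", "organisation"),
    ("center", "centre"),
]

def mixed_variants(text):
    """Return (us_hits, uk_hits): variant spellings found in text."""
    words = set(re.findall(r"[a-z]+", text.lower()))
    us = [u for u, b in US_UK_PAIRS if u in words]
    uk = [b for u, b in US_UK_PAIRS if b in words]
    return us, uk

# Hypothetical sample sentence mixing both variants.
sample = "The colour of the logo reflects the organization's center."
us, uk = mixed_variants(sample)
if us and uk:
    print("Mixed variants found:", us, uk)
```

Students who read sources from multiple English variants may mix spellings legitimately, so this check, like the others, is a prompt for conversation rather than a verdict.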

Smooth transitions between unrelated ideas.

While good transitions are valuable, AI sometimes creates unnaturally fluid connections between disparate concepts that do not actually relate logically, prioritizing smooth flow over meaningful relationships between ideas.

Perfect paragraph proportions.

AI tends to generate very regular paragraph lengths and structures, creating an unnaturally balanced appearance. Student writing typically shows more variation in paragraph length based on content needs.

Absence of course-specific content.

AI-generated work often fails to incorporate specific course materials, discussions, or insights that were emphasized in class. Watch for papers that seem comprehensive but do not reference any unique course content.

Mishandled nuanced topics.

When addressing complex or controversial subjects, AI often presents oversimplified or artificially balanced perspectives that do not engage meaningfully with the nuances discussed in class. This can result in sophisticated-sounding but superficial analysis.

Template-like argument structures.

AI frequently follows predictable patterns in argument construction, such as consistently using three supporting points or following rigid "however, therefore, moreover" sequences that feel mechanical rather than organic.

Error patterns that defy typical student mistakes.

While AI can make mistakes, these errors often differ from typical student writing issues. Look for unusual error patterns such as consistently perfect grammar alongside fundamental conceptual misunderstandings.


Practical Approaches for Faculty to Detect Generative AI Use:

Provide a clear statement about AI use on your syllabus. 

If you allow the use of AI in your courses, be clear and explicit about what is and is not allowed.

Repeat your AI policy for each learning assessment and project. 

Include the policy clearly in assignment guidelines and directions and consider including a statement on your D2L course site.

Discuss information literacy and critical thinking about AI with your learners. 

Booth Library has a page on Information Literacy Instruction to support faculty across campus.

Use Baseline Writing Samples. 

Compare suspected work with in-class writing samples or earlier assignments.

Beware of AI Detection Tools. 

Do not rely on AI detection software; these are not foolproof and can regularly produce false positives or negatives.

Have A Conversation. 

Ask students to explain or expand on their written work in a live discussion. Be sure to discuss their writing process to see if it includes potential generative AI use.

Use Plagiarism Checks. 

Cross-check the content with known databases and AI output sources in Turnitin.

Design with a Focus on Process.

Redesign assignments to require drafts, notes, and evidence of revision to ensure students are engaged in the writing process.


Evidence Guidelines for Academic Dishonesty Cases Involving AI

Prevention through Course Design

Establish Clear AI Policies

Design AI-Resistant Assignments

 

Documenting Potential AI Use

Writing Style Analysis

Technical Indicators

 

Conduct Board Process

Evidence Standard

Hearing Participation

 

Documentation Checklist for University Student Standards Board (AI Cases)


Resources Consulted

Baule, S. M. (2023, September 5). 6 tips to detect AI-generated student work. ECampus News: Innovations in Education and AI.

Code.org, CoSN, Digital Promise, European EdTech Alliance, Larimore, J., & PACE. (2024). AI Guidance for Schools Toolkit.

D’Agostino, S. (2023, January 11). ChatGPT Advice Academics Can Use Now. Inside Higher Ed.

Fleckenstein, J., Meyer, J., Jansen, T., Keller, S. D., Köller, O., & Möller, J. (2024). Do teachers spot AI? Evaluating the detectability of AI-generated texts among student essays. Computers and Education: Artificial Intelligence, 6, 100209.

OpenAI. (2023). Teaching with AI.

Shneiderman, B. (2022). Human-centered AI. Oxford University Press.

Western Washington University. (n.d.). Evaluating Student Work when AI is Suspected.

White, J. (2023, November 6). Academic Integrity in the Age of AI. EDUCAUSE Review.


Guidance updated: January 8, 2025

CONTACT THE DEPARTMENT

Academic Support Center

McAfee Gym, Room 2230
Charleston, IL 61920
(217) 581-6696
success@eiu.edu

Office Hours

Monday - Friday, 8 a.m. - 4:30 p.m.