AI Acceptable Use Policy for CMM College of Theology

1. Purpose

This policy provides clear, concise guidance for the responsible use of Artificial Intelligence (AI) within CMM College of Theology.

This framework applies to all students, faculty, and staff and is organized into two sections: Student AI Policy and Faculty & Staff AI Policy.

Because AI is rapidly evolving, this policy will be updated periodically and is subject to change.


2. Guiding Principles

All use of AI within the CMM College of Theology should be guided by the following principles:

  • Human Dignity: AI must support—not replace—human judgment, vocation, and relationships.
  • Theological Integrity: AI use should align with the school’s theological commitments and educational mission.
  • Ethical Responsibility: Transparency, fairness, privacy, and accountability are essential.
  • Formation and Community: AI should enhance learning and formation without undermining embodied community or mentorship.

Part I: Student AI Acceptable Use Policy

3. Permitted Uses for Students

Students may use AI tools when permitted by the instructor for purposes such as:

  • Studying and comprehension support
  • Grammar, spelling and writing correction
  • Language learning (e.g., biblical languages)
  • Brainstorming or outlining ideas
  • Students and faculty acknowledge that AI is a tool, like libraries or Google searches, and that the citation and source-credit rules in the school catalog apply.
  • Students acknowledge that AI output may be inaccurate. Students are responsible for any incorrect AI-generated information they use, and it may affect their grade.

AI should assist learning, not replace critical thinking, theological reflection, or original work. 


4. Student Responsibilities

Students are responsible for:

  • Treating the Bible, first and foremost, as the primary source for all study
  • Following all course-specific guidelines on AI use given by the instructor, and producing original writing through theological reflection and critical thinking before using AI tools for assistance
  • Explicitly disclosing, when required, the AI tool used (e.g., ChatGPT, Claude, Grammarly, Copilot) and how it was used (e.g., brainstorming, outlining)
  • Ensuring submitted work reflects their own thinking, analysis, and theological judgment
  • Respecting privacy and confidentiality when using AI tools

Failure to follow these expectations may be treated as an academic integrity violation.


5. Prohibited Student Uses

Students may not:

  • Submit AI-generated content as their own work without permission or disclosure
  • Use AI to complete exams, quizzes, or graded assessments unless explicitly allowed
  • Input sensitive or confidential information into AI systems
  • Rely on AI in ways that bypass learning objectives or formation goals

Part II: Faculty and Staff AI Acceptable Use Policy

6. Permitted Uses for Faculty and Staff

Faculty and staff may use AI tools to support:

  • Teaching, course design, and assessment preparation
  • Research assistance (e.g., text analysis, translation, data organization)
  • Administrative tasks and student support services
  • Accessibility and inclusion initiatives

Human oversight and accountability are required in all cases.


7. Faculty and Staff Responsibilities

Faculty and staff are responsible for:

  • Modeling ethical, transparent, and thoughtful AI use
  • Clearly communicating expectations and restrictions for student AI use (e.g., in syllabi)
  • Retaining responsibility for academic judgment, grading, and decision-making
  • Critically evaluating AI outputs for accuracy, bias, and theological appropriateness

AI must not replace faculty mentorship, pastoral care, or evaluative authority.


8. Restricted and Prohibited Uses

Faculty and staff may not:

  • Use AI for high-stakes decisions (e.g., grading, admissions, discipline) without meaningful human review
  • Deploy AI systems that compromise privacy, confidentiality, or pastoral trust
  • Use AI tools that perpetuate bias, discrimination, or theological distortion

9. Data Privacy and Security

  • Sensitive information (student records, pastoral materials, personal data) must not be entered into external AI systems without approval
  • AI tools must comply with institutional privacy policies and applicable laws
  • Data use and storage practices should be transparent

10. Oversight and Review

CMM College of Theology will provide oversight of AI use through designated academic or administrative leadership.

This policy will be reviewed periodically to remain responsive to technological developments, ethical concerns, and theological reflection.


11. Conclusion

This policy affirms that Artificial Intelligence can be a valuable tool for theological education when used responsibly. By maintaining human oversight, ethical clarity, and theological grounding, the CMM College of Theology commits to integrating AI in ways that deepen learning, support formation, and serve the common good.
