
AI Liability Risks for Educational Institutions: 5 Considerations

Heather Salko, Esq.
April 2026
Take steps to help ensure your institution’s AI use does not create unintended liability.

As artificial intelligence (AI) applications reach widespread use, products based on or embedded with AI technology are everywhere on campus. While use cases are ever-changing and the benefits of AI come into focus, liability risks for institutions are also emerging — sometimes in unexpected ways or places.

Below are some risks your institution should be thinking about, along with questions to consider.

1. Data Privacy and Security 

Your institution’s AI-enabled systems collect and process significant amounts of data, much of which is personal (to students, employees, donors, and vendors).  

This valuable information is protected by a web of laws ranging from state privacy laws to federal laws including the Family Educational Rights and Privacy Act (FERPA). Violations could result in fines or lawsuits.  

It is important to understand what data your AI-enabled systems have access to, where that data is stored, and whether the AI systems are using your data for training.  

  • Are AI tools vetted through legal, privacy, and IT security review processes and subject to an AI use policy? 
  • Do your vendor contracts clearly prohibit unauthorized secondary data use, such as for training AI models?
  • Do you explicitly preserve institutional data ownership in your vendor contracts?
  • Are deletion, return, and retention obligations clearly defined in vendor contracts?
  • Has your institution identified and prohibited “shadow AI” use (unauthorized tools or unauthorized purposes) in its AI policy, instead allowing only vetted and authorized tools and features? 

2. AI-Generated Instructional Content and Academic Grading 

Your faculty likely are using AI tools in creative and innovative ways, including to create course outlines and content. They also may be using tools to evaluate or grade student work.  

It is important, while respecting academic freedom, to set boundaries on how AI can and should be leveraged when creating course content or evaluating student work. These boundaries should align with your institution’s AI use policy as well as its academic integrity policy. Failing to protect student data privacy, or allowing unchecked AI use, could result in allegations of bias and discrimination, intellectual property violations, or breach of contract for educational services.

  • If you allow use of AI-generated instructional materials, are they subject to human review before use in the classroom?
  • Are faculty and staff trained to understand that AI outputs are not authoritative sources and that all underlying facts must be confirmed?
  • Does your faculty and staff training address AI’s risks of bias and hallucinations?
  • Do you require human review of any AI-created instructional materials to ensure age-appropriateness?
  • Do you prohibit the use of AI tools to make high-stakes academic decisions, including assigning grades, without human oversight?
  • Is any course material or grading review process documented to demonstrate reasonable care was used? 

3. Student AI Use and Academic Integrity 

In addition to faculty and staff, students have also embraced AI-enabled technology.

As noted above, your academic integrity policy should address student AI use overall, while faculty may determine allowable AI use on a course-by-course or assignment-by-assignment basis. Students facing allegations of cheating could bring lawsuits alleging breach of contract or discrimination.

  • Does your academic integrity policy explicitly address student AI use in academic assignments?
  • Do you encourage faculty to discuss AI use allowances or prohibitions in their classes or on specific assignments?
  • Is AI detection software (if your institution allows its use) treated only as one factor in evaluating a potential academic integrity policy violation, not as conclusive evidence?
  • Do you ensure enforcement practices are consistent across programs and instructors?
  • Does your academic integrity and/or disciplinary policy require that students receive clear notice, an opportunity to respond, and the right to appeal?
  • Are any disability accommodations coordinated with AI use-related policies?
  • Do you have an AI literacy and ethical use training program in place to guide students on when, why, and how to use AI in their academic work? 

4. Monitoring and Student Privacy 

Automated proctoring tools, security systems, behavioral and engagement monitoring, and automated academic success tracking all involve AI. These tools, while helpful, can also invade student privacy or unintentionally discriminate against students. Students could bring lawsuits alleging bias, or breach of contract if your institution has promised to protect student data. Each tool should be evaluated to determine whether its features are appropriate under your AI use policy, state privacy laws, and other state or federal laws.

  • Do you require a privacy impact assessment before deployment?
  • Do you ensure the tools are tested for bias?
  • Is AI monitoring, such as test proctoring, limited to a clearly defined, documented purpose, and for a limited amount of time?
  • Are students and parents informed about what data is collected and how it is used?
  • Are data retention and access controls clearly defined?
  • Do your vendor contracts address who owns and controls the data, where and how the data is stored, and data destruction?
  • Is there human review before disciplinary or safety actions are taken if the AI system detects a problem?
  • Are AI outputs treated as advisory rather than determinative?
  • Are students provided an opportunity to “opt out” where appropriate? 

5. Student Mental Health 

With the ever-present focus on student mental health, along with rising demand and limited resources, some educational institutions are turning to AI-powered solutions to address support gaps. It is important to monitor the use of these technologies, to avoid over-reliance on them, and to know when human intervention is appropriate.

  • Do you ensure any AI tools are clearly not positioned as mental health counseling?
  • Do pre-use disclosures clarify that AI is not a substitute for licensed professionals and that students in distress should reach out to a campus counselor, and do you provide that contact information?
  • Are AI-flagged concerns reviewed and escalated to trained counselors?
  • Are counseling and student affairs teams involved in AI tool use oversight?
  • Are institutional responsibilities and limitations clearly documented?
  • Are AI-based mental health tools tested for bias before — and after — deployment?
  • What steps are taken to maintain student data privacy, and are extra protections provided for health information? 
  • Does your vendor contract require data privacy compliance? 
  • Do you work with legal counsel to ensure contracts contain appropriate insurance and indemnity clauses? 

