AI Notetakers: A Helpful Tool or Hidden Liability Risk?
As artificial intelligence (AI) features spread across workplace tools, promising greater efficiency, it is easy to overlook where these tools appear and the risks they carry. A recent lawsuit highlights the growing use — and risks — of AI notetaking tools.
What Is an AI Notetaker?
These tools use artificial intelligence to transcribe and summarize (often in real time) meetings or phone/video calls. They create a record of everything said in the meeting and, on video calls, the AI notetaker often appears as a participant. Some tools may allow for a full transcription; others may provide a meeting summary instead. These tools may be stand-alone applications or a feature (sometimes enabled automatically) of existing applications.
Risks of AI Notetakers

AI notetakers pose several risks that must be considered before or as they are deployed on your campus or in your school. Among them:
- Legal and Compliance: Notetakers may run in the background. Depending on your state or applicable compliance requirements, you may need to disclose your use of these tools and obtain permission from attendees. In some states, silent use of these tools (even running them in the background) may violate the law.
- Cybersecurity: These notetakers can open your institution to a host of cybersecurity risks if they are not managed properly, including allowing unauthorized access to your important and confidential information.
- Privacy and Privilege: Because notetakers are often third-party applications, their use may expose sensitive information to an outside entity. They could breach confidentiality laws (such as the Family Educational Rights and Privacy Act (FERPA)) or agreements with third parties, as well as waive attorney-client privilege for certain conversations.
- Accuracy: Tools providing summaries and action items from the meeting, not merely transcripts, may not accurately represent what was discussed. Tools can mischaracterize discussions or decisions, or fabricate information that was not part of the conversation.
- Reputation: Full conversations may be recorded, stored, and transcribed or summarized. Accidental misstatements or candid discussions of sensitive topics could harm your institution’s reputation if recordings or transcripts are released. People also may be reluctant to speak freely if they know every meeting is being recorded.
Before using or enabling an AI notetaker, users should confirm:
- The tool was vetted according to your institution’s policy and data is not shared with outside parties or used to train models.
- The circumstance is an approved situation for use.
- Your institution’s recording and privacy rules are satisfied, and consent from participants is obtained.
- All sensitive data is being protected and stored securely.
- Transcripts will not create legal, accuracy, or reputational exposure.
Risk Mitigation
To avoid or reduce the risk of AI notetakers, consider taking these steps:
- Use authorized tools only. Require any AI notetakers to be vetted under your current AI governance structure and IT use policy. Work with your IT department, along with others across your institution, to ensure that the risks, terms and conditions of use, and storage and safety protocols adhere to your institution’s standards.
- Review terms of use. Ensure your IT department works with legal counsel to review and understand the terms of use for these tools. A thorough review of these terms may reveal hidden risks or security concerns (such as allowing training on data obtained by the tool, or data transmission to a foreign country) that would disqualify the tool from use. If you decide to use these tools, look for one that complies with your legal requirements and risk tolerance.
- Limit circumstances of use. Set forth rules about when these tools can, and cannot, be used. Provide specific examples and plainly state any situations in which the tools must not be used, such as conversations with legal counsel, discussions involving private information, or other situations your institution deems too risky.
- Turn off automatic join/record settings. Some tools embedded in meeting platforms automatically join and record meetings. Determine whether the default setting should be “off,” requiring an affirmative action to enable an AI notetaker to join or participate in a meeting.
- Require consent. Regardless of whether your state requires one- or two-party consent to record calls, make it your institution’s practice to announce the presence of an AI notetaker and request consent from all parties. If someone doesn’t want their conversation recorded, respect their wishes and take notes the old-fashioned way.
- Store transcripts locally. To avoid the risk of sensitive information falling into the wrong hands, discuss storing any recordings or records the tool produces on a local server rather than in the “cloud” or on the provider’s servers.
- Require human review. Because accuracy is important — and AI tools are known to make mistakes — require a human to review and correct any records produced, including transcripts, summaries, or action items, to ensure they contain correct information before being distributed to participants.
- Follow records retention rules. The records produced by using AI notetakers may be considered an education record under FERPA (depending on the nature of the conversation) or become evidence in a future legal skirmish. For public institutions, they also may constitute public records that are subject to disclosure. Ensure these records are captured under your records retention policy and that you adhere to the requirements of your institution’s policy.
If your institution adopts AI notetakers, train staff on their proper use cases, including use limitations, and how to disable the tool.
About the Author
Heather Salko, Esq.
Manager of Risk Research
Heather oversees the development of risk research publications. Her areas of expertise include employment law, Title IX, and student mental health. Before joining the Risk Research team, she practiced employment and insurance coverage law and handled UE liability claims for more than a decade.