The Accreditation Council for Continuing Medical Education (ACCME) recently released its official guidance on the responsible use of AI. The guidance is intended to help providers navigate the growing role of AI tools in continuing medical education and implement them responsibly in their workflows.
Why This Matters for CME Providers
Many CME organizations are already using AI, whether intentionally or indirectly, through tools embedded in their learning management systems, assessment builders, or even their analytics dashboards. This ACCME guidance makes it clear that existing accreditation expectations still apply, even when AI becomes part of that workflow.
For CME providers, this update is important because it:
- Encourages thoughtful, evidence-based use of AI in education design.
- Clarifies expectations around integrity and transparency when AI plays a role.
- Reinforces that accrediting standards still apply when using AI tools.
Key Highlights
Safeguard Independence and Mitigate Bias
AI-assisted content must still meet the highest standards of educational integrity. Providers must ensure that:
- AI-generated or AI-assisted materials do not introduce bias or unintentional influence.
- Commercial interests have no influence on recommendations, content structure, or learning pathways generated by AI.
- Providers maintain complete control over all educational content, whether created by humans or AI.
Transparency and Disclosure When Using AI
ACCME expects providers to be transparent about when AI tools are used in educational development. This includes disclosing:
- The name and version of the AI tool used.
- The purpose of its use (e.g., drafting content, analyzing learner data).
- Clear statements to learners when materials have been created or edited with AI.
Note: Basic grammar or spelling assistance does not require disclosure.
Human Oversight Remains Essential
AI should augment, not replace, human decision-making. Even if AI contributes to content generation or analysis, providers bear responsibility for:
- Ensuring the accuracy and validity of educational content.
- Confirming clinical recommendations are grounded in current science, evidence, and clinical reasoning.
- Reviewing assessments and evaluation tools created or assisted by AI.
What This Means for LMS-Delivered CME
If your organization delivers CME activities through a learning management system (LMS), you are likely to be affected in areas such as:
- Curriculum design workflows: Providers can experiment with AI for drafting and organizing materials, but human review and approval remain mandatory.
- Assessment creation: AI can be a source of case scenarios and questions, but providers must validate clinical accuracy and screen for bias.
- Learner analytics and evaluation: AI tools can help summarize large evaluation datasets, but interpretations and decisions must be defensible.
- Documenting compliance in LMS reporting: When AI is used to generate or edit materials stored in your LMS, ensure disclosure and content governance practices are documented to support accreditation compliance.
Next Steps for CME Providers
- Review your AI tool usage across your educational design and delivery workflows.
- Update internal policies on AI use in content creation and LMS integration.
- Create templates or disclosures for when AI is used in learning materials.
- Train staff and faculty on ACCME expectations for oversight.