The following is from Becker’s Healthcare.
The U.S. Department of Health and Human Services (HHS) has taken an initial step toward regulating emerging AI tools and algorithms in the healthcare sector.
On Dec. 13, the agency released the “Health Data, Technology, and Interoperability: Certification Program Updates, Algorithm Transparency, and Information Sharing” rule.
The rule is the agency’s attempt to increase transparency around the use of artificial intelligence in clinical settings, according to an HHS press release.
Five things to know about the rule:
- This initiative introduces transparency requirements for certified health IT and software developers, particularly focusing on AI and predictive algorithms such as models that analyze medical imaging, generate clinical notes and notify clinicians of potential risks to patients.
- The goal of the rule is to require developers to provide healthcare organizations with information and data that they can use to evaluate whether their algorithms are promoting fairness, appropriateness, validity, effectiveness and safety.
- Under the rule, developers must provide organizations with details on the software’s development process and functionality. This includes disclosing funding sources, specifying the software’s intended role in decision-making, and providing guidance on when clinicians should exercise caution in using it.
- Developers will also need to inform customers about the data used to train the AI. Additionally, they will be required to disclose performance metrics, describe ongoing performance-monitoring procedures, and outline how often the algorithms are updated.
- The requirements are set to take effect by the end of 2024 for decision support software certified under the HHS certification program.
The rule comes at a time when many hospital and health system executives have called on agencies such as HHS to create a structured framework to guide the ethical development and application of AI in healthcare.