ICMR releases guidelines for use of AI in health sector

The document outlines 10 key patient-centric ethical principles for the application of AI in the health sector

New Delhi: The Indian Council of Medical Research (ICMR) has released the Ethical Guidelines for Artificial Intelligence (AI) in Healthcare and Biomedical Research to “guide effective yet safe development, deployment and adoption of AI-based technologies”.
As per the guidelines document, diagnosis and screening, therapeutics, preventive treatments, clinical decision-making, public health surveillance, complex data analysis, prediction of disease outcomes, behavioral and mental healthcare, and health management systems are among the recognized applications of AI in healthcare.
Since AI cannot be held accountable for the decisions it makes, “an ethically sound policy framework is essential to guide the AI technologies development and its application in healthcare. Further, as AI technologies get further developed and applied in clinical decision making, it is important to have processes that discuss accountability in case of errors for safeguarding and protection,” the document notes.
It outlined 10 key patient-centric ethical principles for AI application in the health sector for all stakeholders involved. These are accountability and liability, autonomy, data privacy, collaboration, safety and risk minimization, accessibility and equity, optimization of data quality, non-discrimination and fairness, validity, and trustworthiness.
The autonomy principle ensures human oversight of the functioning and performance of the AI system. Before initiating any process, it is also critical to obtain the consent of the patient, who must be informed of the physical, psychological and social risks involved.
The safety and risk minimization principle is aimed at preventing “unintended or deliberate misuse”, keeping anonymized data delinked from global technology to avoid cyber attacks, and ensuring a favorable benefit-risk assessment by an ethics committee, among a host of other areas.
The accountability and liability principle underlines the importance of regular internal and external audits to ensure the optimum functioning of AI systems; the audit findings must be made available to the public. The accessibility, equity and inclusiveness principle acknowledges that the deployment of AI technology assumes widespread availability of appropriate infrastructure and thus aims to bridge the digital divide.
The guidelines also outline briefs for relevant stakeholders, including researchers, clinicians/hospitals/public health systems, patients, ethics committees, government regulators and industry. Arguing that developing AI tools for the health sector is a multi-step process involving all these stakeholders, the document noted: “Each of these steps must follow standard practices to make the AI-based solutions technically sound, ethically justified and applicable to a large number of individuals with equity and fairness. All the stakeholders should adhere to these guiding principles to make the technology more useful and acceptable to the users and beneficiaries of the technology.”
As per the guidelines, the ethical review process for AI in health comes under the domain of the ethics committee, which assesses a host of factors including data source, quality, safety, anonymization and/or data piracy, data selection biases, participant protection, payment of compensation, and the possibility of stigmatization, among others.
The body is “responsible for assessing both the scientific rigor and ethical aspects of all health research and should ensure that the proposal is scientifically sound and weigh all potential risks and benefits for the population where the research is being carried out,” the document notes.
Informed consent and the governance of AI tools in the health sector are other critical areas highlighted in the guidelines; the latter is still at a preliminary stage even in developed countries. India has a host of frameworks that marry technological advances with healthcare.
These include the Digital Health Authority for leveraging Digital Health Technologies under the National Health Policy (2017), the Digital Information Security in Healthcare Act (DISHA) 2018, and the Medical Devices Rules, 2017.