GAO & NAM Discuss Healthcare

The report, Artificial Intelligence in Health Care: Benefits and Challenges of Technologies to Augment Patient Care (November 2020), https://gao.gov/assets/720/710920.pdf, was jointly published by the Government Accountability Office (GAO) and the National Academy of Medicine (NAM), https://nasonline.org.

Part One of the joint publication examines the use of AI in healthcare and the challenges involved. Part Two includes NAM’s paper, Advancing Artificial Intelligence in Health Settings Outside the Hospital and Clinic.

GAO produced the report at the request of Congress. It undertook the study because the U.S. healthcare system is under pressure from an aging population, rising disease prevalence (including the current pandemic), and increasing costs.

New technologies such as artificial intelligence (AI) could augment patient care across healthcare settings, including outpatient and inpatient care, emergency services, and preventive care. However, the use of AI-enabled tools in healthcare raises a variety of ethical, legal, economic, and social concerns.

GAO found that AI tools show promise for augmenting patient care. First, clinical AI tools show promise in predicting health trajectories for patients, recommending treatments, guiding surgical care, monitoring patients, and supporting population health management. These tools are at varying stages of maturity and adoption, but, with the exception of population health management tools, most have not achieved widespread use.

Second, administrative AI tools show promise in reducing provider burden and increasing efficiency by recording digital clinical notes, optimizing operational processes, and automating laborious tasks. These tools are also at varying stages of maturity and adoption, ranging from emerging to widespread use.

GAO identified several challenges that may impede the widespread adoption of AI tools:

  • Difficulties obtaining the high-quality data needed to create effective AI tools
  • Limitations and bias in data that can reduce the tools’ safety and effectiveness for different groups of patients, leading to treatment disparities (illustrated in the sketch after this list)
  • Difficulties scaling up AI tools, integrating them into existing workflows, and deploying them in new settings, because of differences among institutions and patient populations
  • A lack of transparency, both because it can be difficult to determine how the tools work and because of factors that developers could otherwise control
  • Growing privacy risks and concerns, since large quantities of patient data will be in the hands of more people and organizations as more AI systems are developed
  • Uncertainty over liability, since multiple parties are involved in developing, deploying, and using AI tools, which may slow adoption and impede innovation
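To make the bias concern concrete, the short sketch below shows one way an evaluator might check whether a model's accuracy differs across patient groups. It is only an illustration and is not drawn from the GAO report: the group names, predictions, and the 10-percentage-point threshold are invented for the example.

```python
# Hypothetical illustration (not from the GAO report): checking whether a
# diagnostic model performs equally well across patient groups.
from collections import defaultdict

def accuracy_by_group(records):
    """Compute prediction accuracy separately for each patient group.

    records: iterable of (group, true_label, predicted_label) tuples.
    Returns a dict mapping each group to its accuracy.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, y_true, y_pred in records:
        total[group] += 1
        correct[group] += int(y_true == y_pred)
    return {g: correct[g] / total[g] for g in total}

if __name__ == "__main__":
    # Fabricated example data: (patient group, true outcome, model prediction)
    sample = [
        ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 1),
        ("group_b", 1, 0), ("group_b", 0, 0), ("group_b", 1, 0), ("group_b", 1, 1),
    ]
    rates = accuracy_by_group(sample)
    for group, acc in sorted(rates.items()):
        print(f"{group}: accuracy = {acc:.2f}")
    # A large gap between groups is the kind of disparity the report warns about.
    if max(rates.values()) - min(rates.values()) > 0.10:
        print("Warning: accuracy gap across groups exceeds 10 percentage points.")
```

In practice, reviewers would use clinically meaningful metrics (such as sensitivity and specificity) and far larger samples, but the basic idea of comparing performance group by group rather than in aggregate is the same.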

More broadly, other factors need to be addressed: encouraging interdisciplinary collaboration between developers and healthcare providers; having stakeholders and experts establish best practices; creating opportunities for more workers to develop interdisciplinary skills; and having policymakers collaborate with stakeholders to clarify appropriate oversight mechanisms, so that AI tools remain safe and effective after deployment and throughout their lifecycle.

NAM’s Part Two paper discusses how, although applications for AI exist in typical hospital and clinic settings, AI is also needed for health monitoring, intervention, and promoting overall well-being outside the hospital and clinic.

The paper’s NAM authors focused on health-related applications of AI specifically in environments such as the home, the workplace, and community settings, which the paper refers to as health settings outside the hospital and clinic (HSOHC).