Hallucinations and artificial intelligence (AI) certainly sound like an odd combination. One is the false perception of something that isn't there, and the other is a type of technology. 

However, hallucinations in generative AI refer to mistakes: outputs that are inaccurate, fabricated or unsupported by real data, yet presented as confidently as correct answers. Entities utilizing generative AI must address potential hallucinations to ensure the services they offer to consumers are beneficial — and responsible. What exactly is generative AI? It refers to AI techniques that can be used to create or produce various types of new content, including text, images, audio and videos.  

Used in conjunction with cloud computing, generative AI is poised to bring about major transformations in the healthcare space. A prime example is conversational AI: computer systems that communicate with users through natural-language interfaces involving images, text and voice, automating more natural, human-like interactions between computers and users.  

Because generative AI relies on deep-learning algorithms to create new content, it can also take unstructured data sets and analyze them. As McKinsey & Company notes, this represents a potential breakthrough for healthcare operations, which are rich in unstructured data such as clinical notes, diagnostic images, medical charts and recordings. 

According to Accenture, 98 percent of healthcare providers and 89 percent of payer executives believe that advancements in generative AI are ushering in a new era of enterprise intelligence. And, 40 percent of all working hours in healthcare could be supported or augmented by language-based AI. 

Again, though, this technology must be employed responsibly by healthcare providers and other entities. That means staying alert to the issues that can arise in generative AI, including those hallucinations. Other complications that can occur with this type of AI are algorithm bias, poor commonsense reasoning and the lack of generally agreed-upon model evaluation metrics. One of the biggest concerns in the healthcare industry is the security of protected health information (PHI). 

At Providertech, we know that enabling the use of accurate and diverse data is paramount to a responsible generative AI platform. We also know that any conversational AI utilized in healthcare must adhere to strict security standards.

Ensuring Accuracy, Objectivity and Fairness

Algorithm bias, often referred to as AI bias, describes how artificial intelligence technology can be negatively impacted by social, economic and systemic biases. Put another way, AI bias is the application of an algorithm that compounds existing inequities in socioeconomic status, race, ethnic background, religion, gender, disability or sexual orientation, amplifying those inequities across health systems. A primary reason for this occurrence is that medical researchers cannot easily procure large, diverse medical data sets. 

Algorithm bias affects a healthcare system that already displays barriers to healthcare equity, including unequal testing and treatment, bias in medical research and data, and institutional racism. Research data and medical records are less likely to represent Black patients’ information adequately and accurately due to socioeconomic inequalities in healthcare delivery. Also, racial minorities and those living in poverty tend to receive lower-quality care than non-Hispanic whites and people with higher levels of disposable income and accumulated wealth. 

A perspective paper by Stanford University researchers argued that, to ensure the safe and effective use of AI in healthcare, researchers and developers will need to work to eliminate bias in these tools by consistently evaluating them. Another best practice for achieving unbiased healthcare algorithms is investing in programs that promote data science training and offering grants and scholarships to encourage greater interest in the field among diverse communities. 
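What that consistent evaluation might look like in practice can be sketched in code. The following is a minimal sketch, assuming a recurring audit job; the group labels, records and alert threshold are entirely hypothetical, not real audit data. It compares a model's positive-recommendation rates across patient groups and flags any gap that exceeds a tolerance:

```python
# Minimal sketch of a recurring bias audit: compare a model's
# positive-recommendation rates across patient groups and flag any
# gap above a chosen tolerance. Groups, records and the 0.10
# threshold are hypothetical illustrations, not real audit data.
from collections import defaultdict

def parity_gap(records):
    """records: iterable of (group, prediction) pairs, prediction in {0, 1}."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, prediction in records:
        totals[group] += 1
        positives[group] += prediction
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical audit sample: (patient group, yes/no recommendation)
audit = [("group_a", 1), ("group_a", 1), ("group_a", 0),
         ("group_b", 1), ("group_b", 0), ("group_b", 0)]

gap, rates = parity_gap(audit)
print(rates)          # per-group positive-recommendation rates
if gap > 0.10:        # tolerance chosen purely for illustration
    print(f"Flag for review: parity gap of {gap:.2f} exceeds tolerance")
```

A check like this is deliberately simple; the point is that it runs on a schedule, so drift toward biased outputs is caught as the model and its underlying data change.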

Healthcare organizations can mitigate AI bias by implementing and maintaining strict data standards that promote accuracy and fairness. Healthcare providers using AI to interact with patients also should develop a training framework – for patients and staff – that accounts for a wide variety of social determinants of health (SDOH). 

How does our conversational AI solution curtail algorithm bias? One of the main ways is by incorporating task-specific models to manage appointments and health data. Because these models constrain decisions to objective criteria, they mitigate bias markedly more effectively than open-ended large language models (LLMs). Our approach not only enhances service equity and reliability but also strictly adheres to HIPAA compliance to safeguard patient data privacy and security. 
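To make the distinction concrete, here is a simplified sketch of task-specific routing. It illustrates the general pattern only, not Providertech's actual implementation, and the intents and handlers are hypothetical. Each request maps to a narrow, auditable handler governed by deterministic rules, and anything outside that scope is deferred to staff rather than improvised by a model:

```python
# Simplified sketch of task-specific routing: each request maps to a
# narrow, auditable handler with deterministic rules instead of being
# passed to an open-ended LLM. Intents and handlers are hypothetical.

def handle_reschedule(request):
    # Deterministic scheduling logic would run here: look up open
    # slots, apply the practice's booking rules, return options.
    return "Here are the next available appointment times: ..."

def handle_refill_status(request):
    # Reads a status field from the system of record; no free-form
    # text generation is involved.
    return "Your refill request is being processed."

INTENT_HANDLERS = {
    "reschedule_appointment": handle_reschedule,
    "refill_status": handle_refill_status,
}

def respond(intent, request):
    handler = INTENT_HANDLERS.get(intent)
    if handler is None:
        # Out-of-scope requests (medical advice, for example) are
        # routed to a human instead of answered by a model.
        return "Please contact our office so a staff member can help."
    return handler(request)

print(respond("reschedule_appointment", {"patient_id": "test-123"}))
print(respond("medical_advice", {"symptoms": "chest pain"}))
```

Because every path through a system like this is enumerable, the same kind of input always yields the same kind of answer, which is what makes task-specific designs auditable for bias in a way an open-ended generator is not.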

Keeping Security at the Forefront 

The goal of utilizing conversational AI for patients is to provide them with answers to common questions and conveniently deliver educational information electronically, without the lengthy phone call that may not match their communication preferences. It should never be used to give medical advice, including whether a patient should visit the emergency department based on specific symptoms. 

Healthcare providers should be transparent about the type of generative AI they are using, including the source(s) of its information. They must prioritize informed consent protections and mitigate concerns about the possible leakage of PHI. 

Any generative AI solution offered by providers must ensure compliance with current government healthcare regulations while monitoring future ones. This commitment to security helps alleviate mistrust among doctors and healthcare consumers, one of the main barriers to the adoption of generative AI technology. 

Keeping security at the forefront, external system interactions through Providertech’s conversational AI solution are designed to ensure we never store PHI unnecessarily. This “touch-and-go” method allows us to access necessary information without retaining sensitive data, further reducing potential risk points. Also, regular backups ensure that healthcare data is never lost and that data recovery options are always available without compromising security.
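As a rough illustration of the touch-and-go pattern (the fetch stub, field names and test identifier below are hypothetical stand-ins, not Providertech's integration), data is pulled from the system of record, used to answer the immediate request, and then discarded rather than written to any local store:

```python
# Rough illustration of a "touch-and-go" interaction: PHI is fetched,
# used for the immediate reply, and never cached, logged or written
# to disk. The fetch stub and field names are hypothetical.

def fetch_appointment(patient_id):
    # A real deployment would query the practice's scheduling system
    # over an encrypted connection; stubbed here for the sketch.
    return {"patient_id": patient_id, "next_visit": "Monday at 9:00 AM"}

def answer_appointment_question(patient_id):
    record = fetch_appointment(patient_id)   # "touch": read what's needed
    reply = f"Your next visit is {record['next_visit']}."
    return reply                             # "go": the record goes out of
                                             # scope; nothing is persisted

print(answer_appointment_question("test-123"))
```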

Components of Responsible Use of Generative AI Solutions for Healthcare 

Key to responsibly utilizing generative AI in healthcare is ensuring that it augments operations rather than replaces them. Solutions should be designed with input from healthcare providers as well as technology professionals with experience in the industry. 

Workflow integration is essential for a generative AI solution that benefits providers and their staff. The technology should integrate seamlessly into existing healthcare workflows to automate tedious administrative tasks instead of creating more of a burden. A conversational AI offering that is not designed to integrate with other healthcare technology systems can introduce inefficiencies and errors. 
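In practice, integrating with existing systems usually means speaking the data standards those systems already use. As one hedged example (the base URL below is the public HAPI FHIR test sandbox, not a Providertech endpoint, and a real deployment would point at the practice's own authenticated EHR), a conversational AI back end might look up a patient's appointments through a standard FHIR REST query:

```python
# Hedged example of integration through the HL7 FHIR standard. The
# base URL is the public HAPI FHIR test sandbox, which holds no real
# PHI; a production system would use the practice's authenticated
# EHR endpoint instead.
import requests

FHIR_BASE = "https://hapi.fhir.org/baseR4"  # public test server only

def upcoming_appointments(patient_id):
    # Standard FHIR search: GET [base]/Appointment?patient=[id]
    resp = requests.get(
        f"{FHIR_BASE}/Appointment",
        params={"patient": patient_id, "_count": 5},
        headers={"Accept": "application/fhir+json"},
        timeout=10,
    )
    resp.raise_for_status()
    bundle = resp.json()
    return [entry["resource"] for entry in bundle.get("entry", [])]

# Usage with a sandbox identifier (no real patient data involved):
for appt in upcoming_appointments("example"):
    print(appt.get("status"), appt.get("start"))
```

Because the query follows the FHIR standard rather than a proprietary format, the same code path can talk to any conforming scheduling system, avoiding the one-off connectors that tend to multiply inefficiencies and errors.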

To give healthcare providers more best practices for responsibly implementing generative AI, we’re sharing tips from two highly reputable sources: the American Hospital Association (AHA) and the World Economic Forum. 

The AHA’s tips on generative AI use include:

  • Take a people-first approach.

Focus on people as much as technology. Ramp up talent investments to address both creating and using AI. Develop technical competencies like AI engineering and enterprise architecture and train people across the organization to work effectively with AI-infused processes.       

  • Invest in a sustainable tech foundation.

Consider requirements for infrastructure, architecture, operating model and governance structure to leverage generative AI and foundation models — keeping a close eye on cost and sustainable energy consumption.

  • Deliver responsible AI.

Build controls for assessing risks at the design stage and embed responsible AI principles and approaches throughout your organization. 

Recommendations from the World Economic Forum are:

  • Build trust through empathy and domain expertise.

Prioritize fine-tuning of models on healthcare-specific data, and have doctors test and rate responses to improve outputs and instill models with empathy.

  • Keep humans in the loop.

Implement clinical processes that maintain human oversight, especially in high-risk discussions and with patients with severe disease.

  • Plan to scale across contexts.

Stakeholders must work to create flexible deployment models that recognize varying needs across regions. 

At Providertech, our conversational AI solution for patient engagement is designed to understand the way patients communicate and adapt accordingly. And, it promotes patient engagement by meeting healthcare consumers’ expectations for a seamless experience. Call 540-516-3602 to start your interactive demo of this solution using test patient Sara Morales D.O.B. 7/20/81.