Research points to the advantages of employing artificial intelligence (AI) in healthcare. For example, according to Frost & Sullivan's estimates, the implementation of AI is projected to enhance patient outcomes by 30 to 40 percent while reducing treatment costs by half.
Some, however, remain skeptical about the once “futuristic” technology. Although AI handles repetitive tasks, enabling medical practice staff to focus on patient care, detractors believe the technology will replace humans.
That certainly isn’t the case, as AI can’t replace human judgment, context or compassion. It’s a powerful support tool for healthcare professionals already experiencing burnout because of staffing shortages and a bevy of administrative burdens.
Other concerns center on the security surrounding the use of AI in healthcare. In this blog, we present top myths about the security of AI in healthcare and the steps physician practices can take to ensure protected health information (PHI) and other sensitive data stay secure.
Myth #1: AI Tools Are Intrinsically HIPAA-Compliant
Many healthcare professionals assume that any AI solution marketed for healthcare use is automatically HIPAA-compliant. This assumption, however, can expose practices to costly compliance risks.
Many commercial AI platforms use customer data for model training, which directly violates HIPAA if PHI is involved and no Business Associate Agreement (BAA) is in place. Under the HIPAA Privacy Rule, covered entities and their business associates must obtain appropriate authorizations to use PHI for purposes such as training AI technologies.
HIPAA’s Security Rule applies to AI systems that store, generate or process ePHI, even though the rule predates AI. Every AI tool that ingests, processes, generates or stores patient data must comply with the same administrative, physical and technical safeguards as any other healthcare IT system.
Any AI vendor that creates, receives, maintains or transmits ePHI on your behalf is a business associate, resulting in the need for a signed BAA before any PHI transmission. No AI tool is “HIPAA certified.” Phrases like “HIPAA ready” or “HIPAA eligible” only mean a provider claims they will sign a BAA.
AI technology becomes a security vulnerability if implemented without encryption, access controls, audit logging and governance. To achieve HIPAA compliance when using AI, healthcare practices should ensure these conditions are met:
- Signed BAA: Any vendor processing PHI must sign a legally binding agreement before any patient data touches their system.
- Technical safeguards: End-to-end encryption, role-based access controls, multifactor authentication and detailed audit logs should be in place.
- Administrative controls: AI-specific policies, documented use cases, workforce training on prohibited uses and human oversight requirements should be implemented.
- Data minimization: AI only acquires data elements required for specific functions; appointment reminders don’t need full chart access.
- Vendor selection: Confirm infrastructure meets HIPAA Security Rule standards through SOC 2 or HITRUST alignment.
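The data minimization point above can be made concrete with a short sketch. In this illustrative Python example (the record structure and field names are hypothetical, not a prescribed schema), a practice filters what an AI appointment-reminder tool receives so it never sees the full chart:

```python
# Illustrative sketch of data minimization: the AI tool receives only
# the data elements its function requires. Field names are hypothetical.

# Full chart record -- never sent to the reminder tool as-is
patient_record = {
    "name": "Jane Doe",
    "phone": "555-0100",
    "appointment_time": "2025-03-04T09:30",
    "diagnoses": ["E11.9"],        # clinical detail the reminder doesn't need
    "medications": ["metformin"],  # also unnecessary for a reminder
    "ssn": "000-00-0000",
}

# Only the fields an appointment reminder actually requires
REMINDER_FIELDS = {"name", "phone", "appointment_time"}

def minimize(record: dict, allowed: set) -> dict:
    """Return a copy containing only the allowed data elements."""
    return {k: v for k, v in record.items() if k in allowed}

reminder_payload = minimize(patient_record, REMINDER_FIELDS)
# reminder_payload now carries only name, phone and appointment time
```

The same allow-list pattern extends to any AI function: define the minimum field set per use case and strip everything else before the data leaves your system.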
Myth #2: AI Security Is Solely an IT Problem
Some physician practices still route AI risk only to their IT departments, but security failures in healthcare AI don't affect data privacy alone. They directly impact patient safety and clinical outcomes.
AI security in healthcare is primarily a patient safety issue. Misconfigurations or data breaches can lead to delayed care, incorrect recommendations or medication errors.
Healthcare has recorded the highest average breach cost of any industry for 12 consecutive years, most recently averaging $7.42 million per breach. Breaches not only lead to identity theft and fraud but also jeopardize patient safety, damage trust and risk regulatory penalties and operational disruption.
The use of AI in healthcare requires substantial datasets, increasing the risk of data breaches. More than 97 percent of organizations that reported an AI-related security incident lacked proper AI access controls, and 63 percent of organizations lack AI governance policies. Healthcare practices should integrate AI oversight into existing risk management and patient safety frameworks.
Myth #3: Current Regulations Such as HIPAA Fully Cover AI Risks
Some healthcare practices assume that HIPAA guarantees safe AI deployment. Although PHI is subject to HIPAA regulations, healthcare providers must implement strict security measures to protect the integrity, confidentiality and availability of PHI when using AI technologies. This includes access controls, encryption and continuous monitoring.
Appropriate safeguards should include:
- Robust authentication and role-based access that ensures only authorized personnel access PHI
- Encryption and secure transmission of patient data
- Firewalls and ongoing monitoring of AI-related PHI use
- Physical security controls for systems housing AI and PHI
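As a minimal sketch of the role-based access point above (the roles and permissions here are illustrative, not a prescribed model), access to PHI and AI features can be gated on a user's role before anything runs:

```python
# Minimal role-based access control sketch.
# Roles and permission names are illustrative assumptions.
ROLE_PERMISSIONS = {
    "physician":  {"read_phi", "use_ai_summaries"},
    "front_desk": {"use_ai_scheduling"},
    "it_admin":   {"view_audit_logs"},
}

def is_authorized(role: str, permission: str) -> bool:
    """Check whether the given role carries the requested permission."""
    return permission in ROLE_PERMISSIONS.get(role, set())

# A front-desk user can run AI scheduling but cannot read PHI
assert is_authorized("front_desk", "use_ai_scheduling")
assert not is_authorized("front_desk", "read_phi")
```

In production this check would sit behind authenticated identity (with multifactor authentication), but the core idea is the same: every AI request is evaluated against the requester's role, and anything not explicitly granted is denied.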
Compliance depends entirely on specific configuration, data handling practices and contractual safeguards. Vendor marketing phrases like “HIPAA ready” or “secure by design” don’t replace a medical practice’s obligation to perform due diligence and risk assessments.
On January 6, 2025, the HHS Office for Civil Rights proposed the first major update to the HIPAA Security Rule in 20 years, introducing stricter expectations for risk management, encryption, and resilience for AI systems processing protected health information. It’s important for practices to proactively interpret and apply these standards instead of waiting for AI-specific regulations.
Myth #4: AI Bias Doesn’t Apply to the Healthcare Industry
A term initially defined by the co-directors of the Applied Artificial Intelligence for Health Care program at the Harvard T.H. Chan School of Public Health, AI bias describes how artificial intelligence technology can be negatively impacted by social, economic and systemic biases.
Inaccurate or biased outputs from AI algorithms can lead to unequal access to care, misdiagnosis or under-treatment for already underserved groups. A perspective paper by Stanford University researchers theorized that, to ensure the safe and effective use of AI in healthcare, researchers and developers will need to work to eliminate bias in these tools by consistently evaluating them.
Providers and payers can play an important part in reducing inequities within the healthcare system and improving patient outcomes. By deploying alternative patient outreach strategies, they can care for patients experiencing healthcare disparities in their communities, homes and workplaces. Once they understand the individual needs of their patients, they can direct patients to the most relevant resources to meet those needs.
Practical Tips for Using AI Securely in Your Practice
Healthcare practices are already using AI to automate administrative tasks. Best practices for healthcare providers to securely use AI include:
Technical Requirements
- Ensure encrypted data storage and transmission for any AI system using PHI.
- Implement robust security controls, including role-based access controls and multifactor authentication.
- Maintain detailed audit logs tracking who used AI tools, what PHI was accessed, and what outputs were generated.
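The audit-log requirement above can be sketched as a structured log entry capturing who used the tool, what PHI was accessed and what output was generated. The schema below is an assumption for illustration, not a regulatory standard:

```python
import json
from datetime import datetime, timezone

def log_ai_event(user_id: str, tool: str,
                 phi_fields: list, output_summary: str) -> str:
    """Build one structured audit entry for an AI interaction.
    The field layout is illustrative, not a standard schema."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,                 # who used the AI tool
        "tool": tool,                       # which tool was invoked
        "phi_fields_accessed": phi_fields,  # what PHI was accessed
        "output_summary": output_summary,   # what was generated
    }
    return json.dumps(entry)

# Example: front-desk staff generating an appointment reminder
line = log_ai_event("staff_042", "reminder_bot",
                    ["name", "phone"], "SMS reminder drafted")
```

Writing entries like this to append-only storage gives compliance staff a reviewable trail of every AI interaction involving PHI.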
Data Handling
- Choose vendors that contractually commit not to train AI models on protected health information.
- Limit PHI exposure using de-identified or limited datasets whenever a use case doesn’t require full identifiers.
- Protect PHI by following the minimum necessary principle.
Policies
- Create written policies explicitly covering AI use.
- Establish a unified AI governance structure to mitigate risks associated with Shadow AI (the unauthorized use of AI tools within an organization).
Training
- Implement regular programs educating employees about AI risks, regulatory obligations and best practices.
- Cover uses of PHI in AI technology, risks of HIPAA non-compliance and dangers of social engineering attacks.
Transparency
- Update your Notice of Privacy Practices to describe how PHI is used in AI technology.
- Require your business associates to provide clear documentation outlining their PHI uses in AI.
Providertech.ai: A Secure AI Choice for Healthcare Practices
Providertech.ai is a healthcare-first agentic AI platform built around security standards, including encrypted communications, role-based access control and comprehensive audit logging, while also optimizing resource allocation. It was built by healthcare professionals with many years of experience and reduces staff burden by automating administrative tasks, allowing practice staff to focus on patient care.
Our environments and your data run on Azure's HITRUST-certified infrastructure, leveraging Azure Blueprints and security technologies such as two-factor authentication, identity and access management (IAM) controls, encryption at rest and in transit, and more. Listen to a sample recording of Providertech.ai, or contact us today to learn more!