How to Choose a HIPAA-Compliant AI Tool for Your Medical Practice

The AI tool your colleague recommended might generate brilliant clinical notes. The one you saw demoed at a conference might save two hours daily. But if either tool processes protected health information without proper HIPAA safeguards, your practice faces fines of up to $50,000 per violation, potential criminal penalties, and reputational damage that no time savings can offset.

HIPAA compliance isn’t a feature checkbox — it’s a set of technical, administrative, and contractual requirements that many AI vendors claim to meet without actually satisfying. A tool that encrypts data in transit but stores it unencrypted at rest is non-compliant. A tool that offers a Business Associate Agreement but doesn’t enforce access controls is non-compliant. A tool with a free tier that excludes PHI protections but doesn’t clearly communicate that limitation is a compliance trap.

This guide walks through the four steps to verify that any AI tool — scribe, chatbot, billing assistant, or scheduling platform — genuinely meets HIPAA requirements before you process a single patient record through it.

HIPAA Requirements for AI Tools: What Actually Matters

HIPAA’s Privacy Rule, Security Rule, and Breach Notification Rule apply to any AI tool that creates, receives, maintains, or transmits protected health information (PHI) on behalf of a covered entity (your practice or health system). In practical terms, this means every AI tool that touches patient data must satisfy three categories of requirements.

Administrative safeguards: The vendor must have documented policies and procedures for protecting PHI, designate a security officer responsible for compliance, conduct regular risk assessments, and train their workforce on PHI handling. You won’t see these directly as a buyer, but they’re foundational — a vendor without internal compliance procedures is a liability regardless of what their marketing page says.

Technical safeguards: The tool must implement access controls (only authorised users can access PHI), audit controls (logging who accessed what data and when), integrity controls (preventing unauthorised alteration of PHI), and transmission security (encryption of PHI in transit). For AI tools specifically, this also means securing the AI model itself — training data that contains PHI must be protected with the same rigour as stored records.

The Business Associate Agreement (BAA): This is the contractual cornerstone. Any vendor that processes PHI on your behalf is a Business Associate under HIPAA and must sign a BAA with your practice. The BAA specifies the vendor’s obligations for PHI protection, their liability in case of a breach, and the permitted uses of patient data. No BAA means no HIPAA compliance — regardless of what the vendor’s website claims about security.

What makes AI tools uniquely complex: Traditional healthcare software stores and retrieves data. AI tools process data through machine learning models, which introduces additional questions: Is patient data used to train the AI model? If so, is it de-identified first? Where does the audio or text data go during processing — to the vendor’s servers, to a third-party cloud provider, to an AI model hosted by another company (like OpenAI or Google)? Each data handoff creates a new compliance surface that must be covered by BAAs and security controls.

Step 1: Verify the Business Associate Agreement

Before evaluating features, pricing, or integration, confirm that the vendor will sign a BAA. This is the non-negotiable first step.

Request the BAA before the trial. Reputable healthcare AI vendors (Nuance DAX, Suki, Abridge, Notable Health) proactively offer BAAs as part of their sales process. If you have to ask for a BAA and the vendor seems confused by the request, that’s your first red flag. If the vendor says a BAA “isn’t necessary” for their tool, walk away — they either don’t understand HIPAA or are deliberately avoiding compliance obligations.

Read the BAA carefully. Key clauses to verify: the vendor’s obligations upon a data breach (notification timeline, remediation responsibilities), the permitted uses and disclosures of PHI (the vendor should only use PHI for the specific services they provide, not for model training, marketing, or sale to third parties), the vendor’s obligation to return or destroy PHI upon contract termination, and subcontractor management (if the vendor uses third-party cloud providers, those providers must also be covered).

Watch for free tier exceptions. Several AI tools offer free tiers that explicitly exclude BAA coverage. Nabla’s free tier, some chatbot platforms, and general-purpose AI tools (ChatGPT’s free tier, for example) may not include BAAs. This means you cannot use these free tiers with real patient data. Verify BAA availability for your specific plan tier before processing any PHI.

Document the BAA. Keep a signed copy of every BAA with every AI vendor. Maintain a register of all Business Associates and their BAA status. This documentation is required for HIPAA compliance audits and essential for breach response.

Step 2: Assess the Data Flow

Understanding where patient data goes — every stop, every server, every third-party — is critical for evaluating compliance risk. AI tools often have more complex data flows than traditional software because the AI processing may occur on different infrastructure than the data storage.

Map the data journey. Ask the vendor to provide a data flow diagram or written description covering: where audio or text data is captured (device, browser, dedicated hardware), where it’s transmitted for processing (vendor servers, cloud provider, AI model host), where processed output (clinical notes, summaries) is stored, whether any data is retained after processing (and for how long), and whether any data is used for AI model training or improvement.

Identify every third party. If the vendor uses Amazon Web Services for hosting, Google Cloud for AI processing, and OpenAI’s API for language generation, each of these represents a third-party data handler that must be covered by appropriate agreements. Ask: “Which third-party services process our patient data, and do you have BAAs or equivalent data protection agreements with each?”

Verify data residency. For practices with state-specific data residency requirements or institutional policies about data location, confirm where patient data is physically stored. US-based hosting (AWS US regions, Azure US regions) is standard for healthcare AI tools, but verify rather than assume — some vendors route data through international servers for processing.

Understand data retention. How long does the vendor retain patient data after processing? Some ambient scribes retain audio recordings temporarily (Abridge retains audio for 90 days), while others delete audio immediately after note generation. Longer retention provides documentation verification benefits but increases the data exposure surface. Confirm the vendor’s default retention period and whether you can configure it to match your practice’s policies.

Step 3: Check Certifications and Compliance Standards

Certifications provide independent verification that a vendor’s security claims are genuine — not just self-reported assertions.

SOC 2 Type II is the baseline certification to require. It verifies that the vendor has implemented and maintains security controls across five trust service criteria: security, availability, processing integrity, confidentiality, and privacy. SOC 2 Type II specifically requires that these controls have been tested over a period of time (typically 6–12 months), not just assessed at a point in time. All major healthcare AI platforms (Nuance DAX, Suki, Abridge, Nabla paid tiers) maintain SOC 2 Type II compliance.

HITRUST CSF is the gold standard for healthcare-specific security. HITRUST incorporates requirements from HIPAA, NIST, ISO 27001, and other frameworks into a single, healthcare-tailored certification. Achieving HITRUST certification requires significant investment — which is why it’s primarily found among enterprise platforms (Nuance DAX, Notable Health). For practices evaluating enterprise AI tools, HITRUST certification provides the strongest third-party assurance. For smaller practices evaluating lighter tools, SOC 2 Type II is a reasonable minimum.

Encryption standards. Verify AES-256 encryption for data at rest and TLS 1.2 or higher for data in transit. These are industry standard, and any healthcare AI tool should meet them. If a vendor can’t confirm these specific encryption standards, their security infrastructure is likely insufficient.

State-specific requirements. Some US states impose additional requirements beyond federal HIPAA: California’s CCPA/CPRA for consumer health data, Texas’s strict breach notification timelines, New York’s SHIELD Act for data security. If your practice operates in states with enhanced privacy requirements, verify that the vendor’s compliance extends to those state-specific obligations.

Step 4: Test With De-Identified Data First

Before processing real patient data through any AI tool, validate its functionality with data that carries no compliance risk.

Create test scenarios with synthetic data. Build fictional patient encounters — synthetic names, fabricated medical histories, invented clinical scenarios — that mimic the complexity of your real patient interactions. Run these through the AI tool to evaluate note quality, accuracy, specialty support, and workflow fit. This testing phase reveals whether the tool meets your clinical needs without exposing real PHI.
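Synthetic encounters like those described above can be generated programmatically so the test set is reproducible across tools you're comparing. A minimal sketch; every name and complaint below is fabricated, and the record structure is an assumption, not any vendor's input format:

```python
import random

# Fabricated source lists — nothing here is, or resembles, real PHI.
FIRST = ["Avery", "Jordan", "Riley", "Casey"]
LAST = ["Testcase", "Sampleton", "Mockwell"]
COMPLAINTS = ["intermittent chest pain", "type 2 diabetes follow-up",
              "persistent dry cough", "medication review"]

def synthetic_encounter(seed: int) -> dict:
    rng = random.Random(seed)  # seeded, so the same test set is regenerable
    return {
        "patient": f"{rng.choice(FIRST)} {rng.choice(LAST)}",
        "age": rng.randint(18, 90),
        "chief_complaint": rng.choice(COMPLAINTS),
        "synthetic": True,  # marker so test data is never mistaken for PHI
    }

for i in range(3):
    print(synthetic_encounter(i))
```

Seeding matters: running the identical synthetic encounters through two candidate scribes lets you compare note quality like-for-like.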

Evaluate note quality against your standards. The de-identified testing phase is where you determine whether the AI’s output meets your documentation standards. Are the notes clinically accurate? Do they follow your preferred structure? Do they capture the nuances of your specialty? Are there systematic errors or omissions? These questions are best answered with test data before you’re committed to a paid contract.

Validate the EHR integration. If the tool claims bidirectional EHR write-back, test it. Does the note appear in the correct chart section? Are the fields properly populated? Does the integration work consistently or intermittently? Integration failures discovered during production use — with real patient charts — are far more disruptive than failures discovered during testing.

Run a controlled pilot with PHI only after confirming that the BAA is signed, the data flow is understood and acceptable, the certifications are verified, and de-identified testing confirms clinical quality. Then deploy with a small group of providers (3–5) for 2–4 weeks before expanding practice-wide.

Red Flags: Signs a Tool Isn’t Truly HIPAA-Ready

These warning signs should stop your evaluation immediately:

No BAA available — or reluctance to provide one. If the vendor doesn’t offer a BAA, they either don’t handle PHI (unlikely if the tool processes clinical information) or they’re avoiding the compliance obligations that HIPAA requires. Either way, you cannot use the tool with patient data.

Vague answers about data handling. Questions like “where is our data stored?” and “who processes it?” should receive specific, confident answers (AWS US-East, processed by our proprietary models, no third-party LLM access). Vague responses (“our cloud infrastructure,” “industry-standard security”) suggest the vendor either doesn’t know or doesn’t want to disclose their data handling practices.

AI model training on your data without explicit consent. Some AI tools use customer data to improve their models. This isn’t inherently wrong — many healthcare AI platforms improve through aggregate learning — but it must be disclosed, consented to, and covered by the BAA. If the vendor’s terms of service include a clause allowing them to use your data for model training without your explicit awareness, that’s a compliance and ethical concern.

No audit trail capability. HIPAA requires that covered entities maintain logs of who accessed PHI and when. If the AI tool doesn’t provide audit logs — or can’t tell you who within the vendor’s organisation accessed your patient data — it fails a fundamental HIPAA requirement.
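The minimum an audit trail must capture per PHI access is who, what record, what action, and when. A sketch of that shape in Python, assuming hypothetical field names rather than any vendor's actual log format:

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass(frozen=True)  # frozen: log entries should be immutable once written
class AuditEntry:
    user_id: str
    patient_record: str
    action: str      # e.g. "view", "edit", "export"
    timestamp: str   # ISO 8601, UTC

def log_access(user_id: str, patient_record: str, action: str) -> AuditEntry:
    return AuditEntry(user_id, patient_record, action,
                      datetime.now(timezone.utc).isoformat())

entry = log_access("dr.lee", "chart-1042", "view")
print(asdict(entry))
```

When evaluating a vendor, ask to see a real export of their audit log and check it carries at least these four fields for every access, including accesses by the vendor's own staff.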

Missing or outdated certifications. SOC 2 reports expire and must be renewed. Ask for the date of the most recent SOC 2 Type II report. If it’s more than 18 months old, the vendor may have let their certification lapse — which means their security controls haven’t been independently verified recently.

Frequently Asked Questions

Can I use ChatGPT or general AI assistants with patient data?

Not on free or standard tiers — these do not include BAAs and may use your data for model training. OpenAI’s Enterprise tier and API access with BAA coverage (available through specific healthcare partnerships) can be used with PHI under appropriate safeguards. However, most practices should use purpose-built healthcare AI tools (Nuance DAX, Suki, Nabla, Freed) that are designed for clinical workflows and include BAAs by default. Using general AI with patient data creates unnecessary compliance risk.

What happens if a vendor I use has a data breach?

Under the BAA, the vendor must notify you of any breach of unsecured PHI without unreasonable delay (and within 60 days at most). You then have obligations under HIPAA’s Breach Notification Rule: notify affected patients within 60 days, report to the HHS Office for Civil Rights, and for breaches affecting 500+ individuals, notify prominent media outlets. Your exposure depends on what safeguards were in place — practices that conducted proper vendor due diligence (verified BAAs, certifications, and data flow) are in a significantly stronger position than those that didn’t.
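The notification clocks above all run from the date the breach is discovered, and smaller breaches (under 500 individuals) may instead be reported to HHS within 60 days of the end of the calendar year. A sketch of the deadline arithmetic; this is an illustration, not legal advice, and "without unreasonable delay" can mean sooner than 60 days:

```python
from datetime import date, timedelta

NOTIFICATION_WINDOW = timedelta(days=60)

def notification_deadlines(discovered: date, affected: int) -> dict:
    """Outside dates under the Breach Notification Rule, keyed by recipient."""
    deadlines = {"patients": discovered + NOTIFICATION_WINDOW}
    if affected >= 500:
        deadlines["hhs_ocr"] = discovered + NOTIFICATION_WINDOW
        deadlines["media"] = discovered + NOTIFICATION_WINDOW
    else:
        # Breaches under 500: HHS report due within 60 days of year end
        deadlines["hhs_ocr"] = date(discovered.year, 12, 31) + NOTIFICATION_WINDOW
    return deadlines

print(notification_deadlines(date(2026, 1, 15), affected=620))
```

Running the function during incident response gives every stakeholder the same hard dates, which matters when remediation and notification work are happening in parallel.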

Is a BAA enough to guarantee HIPAA compliance?

No — a BAA is necessary but not sufficient. The BAA establishes the contractual obligations. Actual compliance requires that both you and the vendor implement the technical and administrative safeguards specified in the BAA. A signed BAA with a vendor that doesn’t encrypt data or maintain access controls provides no real protection. The BAA is the starting point; the verification steps in this guide confirm that the vendor actually delivers on its contractual commitments.
