In most US healthcare contexts, an AI medical assistant that handles patient data will be subject to HIPAA: a legal requirement, not an optional best practice. What healthcare teams need to understand during vendor evaluation is that the compliance requirements for an AI system go significantly beyond what’s required for a messaging platform or hosting environment. The AI service layer, the component that interprets patient inputs, generates outputs, and may orchestrate across multiple systems, must be included in the organization’s contractual, security, and risk-review framework across all relevant data flows. In simple terms, a HIPAA-compliant hosting environment alone is not enough: compliance must extend to every component and data flow where PHI is handled, and this is worth verifying explicitly rather than assuming.
At QuickBlox, we provide AI medical assistant infrastructure for telehealth platforms and digital health developers — which means the compliance questions this page addresses are ones we work through with healthcare teams on every deployment. The gaps we see most often are not about intent but about what teams didn’t know to verify.
In most cases, yes — but it depends on the facts. An AI medical assistant that creates, receives, maintains, or transmits protected health information (PHI) on behalf of a covered entity will typically fall within HIPAA’s scope as part of a business associate relationship, triggering a Business Associate Agreement (BAA) and safeguards requirements. Whether a specific AI product is subject to HIPAA depends on who is using it, what data it handles, and in what role.
This may apply whether the AI system is patient-facing or clinician-facing, whether it is embedded in a larger platform or deployed as a standalone tool, and whether it processes PHI directly or receives it through an integration with another system.
In January 2025, HHS published a proposed update to the HIPAA Security Rule — the first major revision in 20 years. The proposal includes a requirement for a written technology asset inventory and network map covering all systems that create, receive, maintain, or transmit ePHI. While the proposal does not explicitly single out AI systems by name, it reflects HHS’s clear intent to modernize the Security Rule for current technologies — including AI-related environments where ePHI increasingly flows through processing layers that earlier HIPAA guidance did not anticipate. Whether or not the proposed rule is finalized in its current form, the direction of regulatory travel is unambiguous: AI systems handling PHI face increasing scrutiny and should be included in any organization’s HIPAA risk analysis and asset inventory.
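As a concrete illustration, the kind of written inventory the proposal describes can be kept as structured data that records, for each asset, whether it touches ePHI, whether a BAA covers it, and where its data flows. This is a minimal sketch in Python; the asset names and field names are illustrative assumptions, not terminology from the HHS proposal.

```python
# A minimal technology asset inventory that includes the AI service layer
# alongside hosting and messaging. Asset names and fields are illustrative.
ASSET_INVENTORY = [
    {"asset": "hosting-environment", "handles_ephi": True,
     "baa_in_place": True, "data_flows_to": ["messaging-api", "ai-assistant"]},
    {"asset": "messaging-api", "handles_ephi": True,
     "baa_in_place": True, "data_flows_to": ["ai-assistant"]},
    # The AI processing layer: NLP interpretation, LLM inference, output
    # generation. Often evaluated separately, which is where gaps appear.
    {"asset": "ai-assistant", "handles_ephi": True,
     "baa_in_place": False, "data_flows_to": ["ehr-integration"]},
]

# Any asset that handles ePHI without BAA coverage is a compliance gap.
gaps = [a["asset"] for a in ASSET_INVENTORY
        if a["handles_ephi"] and not a["baa_in_place"]]
print(gaps)  # ['ai-assistant']
```

The same structure doubles as the network map: the `data_flows_to` edges describe where ePHI moves between systems.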
For a full explanation of what HIPAA requires and who it applies to, see What Is HIPAA Compliance?
HIPAA compliance for an AI medical assistant involves three related but distinct questions: whether HIPAA applies to the AI system at all, whether appropriate BAAs are in place covering all relevant services, and whether the right technical and administrative safeguards are implemented across all data flows where PHI is handled.
HIPAA compliance rests on the same foundational framework as any healthcare technology: BAA coverage, technical safeguards, access controls, audit logging, and breach notification procedures. For an AI system, that framework must be applied across every component and data flow where PHI is handled, which introduces requirements that standard infrastructure compliance does not always address.
| Requirement | What it means for an AI medical assistant |
|---|---|
| Business Associate Agreement | The BAA should cover all services where the vendor creates, receives, maintains, or transmits PHI on your behalf, including any AI-related services that perform those functions — not just the hosting environment or messaging infrastructure. |
| Technical safeguards | Encryption of PHI in transit and at rest through the AI system; access controls on what data the AI can query; integrity controls on AI-generated outputs |
| Minimum necessary standard | The AI system must access only the PHI strictly necessary for its defined function — not entire patient records unless the function requires it (see the sketch after this table) |
| Audit logging | Every AI interaction involving PHI must be logged and reviewable — including inputs, outputs, and any actions the system takes autonomously |
| Breach notification | If the AI system is involved in a PHI breach, notification obligations apply — including where the breach originates in AI-generated outputs rather than data transmission |
| Risk analysis inclusion | The 2025 HHS proposed Security Rule update would require a written inventory and network map for systems that handle ePHI — which would likely include AI systems where relevant |
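Two of the safeguards above, minimum necessary access and audit logging, translate directly into implementation. The sketch below shows one way to enforce them at the boundary between a patient record store and an AI service; the function names, field allowlists, and logger setup are illustrative assumptions, not a prescribed implementation.

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai_phi_audit")

# Hypothetical per-function allowlists enforcing the minimum necessary
# standard: the AI may query only the fields its defined function needs.
ALLOWED_FIELDS = {
    "appointment_scheduling": {"patient_id", "name", "availability"},
    "symptom_intake": {"patient_id", "reported_symptoms", "allergies"},
}

def fetch_phi_for_ai(function_name: str, record: dict) -> dict:
    """Return only the fields allowed for this AI function, and log the access."""
    allowed = ALLOWED_FIELDS.get(function_name, set())
    scoped = {k: v for k, v in record.items() if k in allowed}
    # The audit entry records what was accessed, by which function, and when.
    # The surrounding AI service would log inputs and outputs the same way.
    audit_log.info(json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "function": function_name,
        "fields_accessed": sorted(scoped),
    }))
    return scoped

record = {"patient_id": "p-123", "name": "A. Patient",
          "ssn": "000-00-0000", "availability": ["Tue 10:00"]}
scoped = fetch_phi_for_ai("appointment_scheduling", record)
# scoped contains patient_id, name, availability; ssn never reaches the AI
```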
The most consequential and most common compliance gap in AI medical assistant deployments is this: organizations assume that a HIPAA-compliant hosting environment covers the AI system operating within it. It doesn’t.
A hosting provider that signs a BAA takes contractual responsibility for the infrastructure — the servers, the encryption of data at rest, the network controls. That agreement does not cover what happens when patient data flows through an AI processing layer — the NLP interpretation, the LLM inference, the generation of clinical outputs — because those processes happen in a separate system with its own data handling, its own potential failure points, and its own compliance obligations.
The practical consequence: an organization can have HIPAA-compliant hosting, a HIPAA-compliant messaging API, and a HIPAA-compliant video layer — and still be operating an AI medical assistant that is not covered under any BAA, because the AI processing component was evaluated separately and the compliance gap between systems was never identified.
The 2025 HHS proposed Security Rule would strengthen requirements for entities and systems handling ePHI — meaning vendors and covered entities should be reassessing their security controls and compliance posture as the regulatory landscape evolves.
The question healthcare teams should be asking every AI vendor is not “are you HIPAA compliant?” — most vendors will say yes, whether or not their specific services involve PHI. The more useful question is: “Which specific services involving PHI are covered under your BAA, and what safeguards and controls apply to each?”
Standard AI medical assistants introduce additional compliance considerations — beyond the hosting and messaging infrastructure — wherever the AI service itself handles PHI. Agentic AI systems introduce further complexity.
An agentic AI that orchestrates across patient intake, EHR integration, scheduling, and follow-up messaging connects to multiple systems simultaneously. Each of those systems may handle PHI, depending on the workflow. Where PHI is involved, appropriate contractual coverage and safeguards are required. And the orchestration layer that connects them — the agentic AI itself — introduces a data flow that may not be covered by any of the individual system agreements.
This is new regulatory territory. The existing HIPAA framework was not designed with autonomous multi-system AI orchestration in mind, and guidance specific to agentic AI compliance is still developing. What is clear is that organizations deploying agentic AI should map the full data flow across every connected system and verify that appropriate contracts and safeguards are in place at each point — not just for the primary platform.
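To make that mapping exercise concrete, the following sketch treats the agent’s connected systems as a directed graph, walks it to find every system the orchestration layer reaches, and checks each against the set of systems with contractual coverage. The system names and coverage set are illustrative assumptions.

```python
# Directed graph of the systems an agentic AI orchestrates across.
# Names are hypothetical; replace with your actual deployment topology.
ORCHESTRATION_FLOWS = {
    "agentic-ai":         ["patient-intake", "ehr-integration", "scheduling"],
    "patient-intake":     ["ehr-integration"],
    "ehr-integration":    [],
    "scheduling":         ["followup-messaging"],
    "followup-messaging": [],
}

# Systems confirmed to be covered by a BAA (illustrative).
BAA_COVERED = {"patient-intake", "ehr-integration", "followup-messaging"}

def walk_flows(start: str) -> set:
    """Collect every system reachable from the orchestration entry point."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.add(node)
            stack.extend(ORCHESTRATION_FLOWS.get(node, []))
    return seen

# Verify coverage at every point in the data flow, including the
# orchestration layer itself.
uncovered = walk_flows("agentic-ai") - BAA_COVERED
print(sorted(uncovered))  # ['agentic-ai', 'scheduling']
```

Note that the orchestration layer itself appears in the result: the agentic AI is part of the data flow, not just a router between covered systems.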
For a full treatment of what agentic AI means for healthcare workflows, see Agentic AI in Healthcare: From Chatbots to Autonomous Workflows.
Five questions every healthcare team should ask before deploying an AI medical assistant (a sketch for tracking the answers follows the list):
1. Which specific services involving PHI are covered under your BAA? Request a specific answer, not a general assurance. The BAA should cover all services where the vendor creates, receives, maintains, or transmits PHI on your behalf, including any AI-related services that perform those functions.
2. Is PHI used to train or fine-tune your models? If PHI is used to train or fine-tune AI models, additional compliance obligations apply. Many AI vendors use interaction data for model improvement unless explicitly opted out — verify this in writing.
3. What patient data does the AI access, and why? The AI should access only the PHI required for its defined function. Ask vendors to specify what patient data the AI queries and why each data type is necessary.
4. What do your audit logs capture? Logging must capture AI inputs, outputs, and any actions taken — not just access to the underlying data. Verify that logs are complete, retrievable, and retained for the required period.
5. Does coverage extend to every system the AI connects to? If the AI orchestrates across EHR, scheduling, and messaging systems, confirm that appropriate contractual coverage and safeguards extend to every system it connects to where PHI is handled — not just the primary platform.
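One lightweight way to keep these answers honest is to record them as structured data and check that nothing is left blank. The keys below mirror the five questions; the field names and example answers are hypothetical.

```python
# Each key corresponds to one of the five vendor questions above.
REQUIRED_ANSWERS = {
    "baa_scope",                  # Q1: which PHI services the BAA covers
    "phi_used_for_training",      # Q2: model training / fine-tuning on PHI
    "phi_fields_accessed",        # Q3: minimum necessary
    "audit_log_coverage",         # Q4: inputs, outputs, autonomous actions
    "connected_systems_covered",  # Q5: agentic orchestration targets
}

# Hypothetical answers from one vendor evaluation.
vendor_evaluation = {
    "baa_scope": "Messaging API and hosting; AI inference layer not listed",
    "phi_used_for_training": "Opt-out confirmed in writing",
    "phi_fields_accessed": ["patient_id", "reported_symptoms"],
    "audit_log_coverage": "Inputs and outputs logged; autonomous actions unclear",
}

# Flag any question the vendor has not answered at all.
missing = REQUIRED_ANSWERS - vendor_evaluation.keys()
print(sorted(missing))  # ['connected_systems_covered']
```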
The compliance question we hear most often from healthcare teams evaluating AI medical assistants isn’t about the HIPAA framework itself — they already know the rules apply. The harder question is architectural: how do you ensure that contractual coverage and safeguards extend across the AI service layer, the messaging infrastructure, and the hosting environment as a coherent whole, rather than leaving gaps between separately evaluated components?
QuickBlox’s AI agent platform is covered under a single BAA that extends across the AI layer, messaging infrastructure, and hosting environment — addressing the fragmentation risk described above. The compliance architecture is designed for the system as a whole, not assembled piecemeal as components are added. If you’re working through HIPAA compliance for an AI medical assistant deployment, we’re happy to walk through the architecture with you.
This page provides general information about HIPAA compliance considerations for AI systems in healthcare. It does not constitute legal advice. Organizations should consult qualified legal counsel for guidance specific to their circumstances.
Yes, in most cases. An AI vendor whose system creates, receives, maintains, or transmits protected health information on behalf of a covered entity is a business associate under HIPAA and must operate under a signed BAA with appropriate technical safeguards. This applies regardless of whether the AI is patient-facing or clinician-facing.
Consumer ChatGPT is not designed for HIPAA-compliant use and should not be used with PHI. OpenAI provides enterprise offerings, and any HIPAA-compliant use depends on the specific product, contract, and configuration in place. A vendor's willingness to sign a BAA is the starting point, not the end point — the organization remains responsible for how the system is configured, governed, and used.
A HIPAA-capable tool provides the technical features — encryption, access controls, audit logging, BAA availability — that make compliant deployment possible. A HIPAA-compliant deployment requires that those features are correctly configured, that the BAA is in place and covers the right components, and that the organization's governance and operational practices meet HIPAA's requirements. Vendors provide capability; compliance requires both vendor and organization to fulfill their respective obligations.
No. A BAA with a hosting provider covers the infrastructure — servers, network, data at rest. It does not cover an AI processing layer operating on the same infrastructure. The AI system requires its own BAA coverage, separate from and in addition to the hosting agreement.
HIPAA breach notification obligations apply. The covered entity must notify affected individuals, HHS, and in some cases media outlets, within defined timeframes. Civil penalties range from $100 to $50,000 per violation depending on culpability, with annual caps per violation category. Criminal penalties apply in cases of knowing violations. The business associate — the AI vendor — also carries direct liability under HIPAA.
Agentic AI systems that orchestrate across multiple systems introduce compliance complexity beyond a single-tool deployment. Each system the agent connects to may handle PHI and, where it does, requires its own BAA coverage. The orchestration layer itself — the agentic AI — must also be covered. Healthcare teams deploying agentic AI should map the full data flow across every connected system and verify BAA coverage explicitly at each point. See Agentic AI in Healthcare for more on what this means in practice.
Yes, if those administrative tasks involve PHI. Scheduling, billing, prior authorization, and intake workflows all commonly involve PHI. An AI system handling any of these functions on behalf of a covered entity is a business associate and requires a BAA.
Last reviewed: April 2026
Written by: Gail M.
Reviewed by: QuickBlox Compliance & Security Team