Security Lapse: 1 Million AI Services Found Exposed Online Without Authentication
A massive scan of 1 million AI services has revealed that platforms including Ollama, Flowise, and n8n are leaking credentials, internal workflows, and commercial API keys.

A comprehensive scan of approximately 2 million hosts and 1 million exposed AI services, released on May 5, 2026, has revealed an expansive and largely unmanaged attack surface. The report highlights a trend of self-hosted platforms being deployed with no default authentication and hardcoded credentials. According to the data, insecure deployments of Ollama, Flowise, n8n, and OpenUI are exposing internal workflows, LLM conversation histories, and sensitive API keys to anyone with an internet connection. The industry's rush to adopt AI is effectively repeating security failures from decades ago, turning sophisticated language models into open backdoors for corporate networks.
While the specific entity behind the investigation remains unidentified in the primary source material, the reported data is granular and verifiable.
- Approximately 31% of over 5,200 Ollama servers tested responded to prompts without requiring authentication, exposing more than 500 models wrapping commercial frontier services.
- Flowise instances were found leaking entire chatbot business logic and connected credential lists, while platforms like n8n and OpenUI exposed active workflows and user conversation histories.
- Over 90 exposed nodes were identified within the government, finance, and marketing sectors, leaving internal prompts and automation configurations accessible to the public.
- Critical deployment failures include authentication disabled by default, credentials hardcoded in docker-compose files, and containers running with root privileges, significantly increasing the potential blast radius of a breach.
The Ollama Vulnerability: One-Third of Verified Servers Open to Public Access
Out of more than 5,200 Ollama servers analyzed, roughly 31% responded to a standard 'Hello' prompt without any authentication challenge. These exposed nodes included over 500 models acting as wrappers for commercial services from Anthropic, Deepseek, Moonshot, Google, and OpenAI, making the underlying commercial services potentially reachable through API keys stored in plaintext on user-owned infrastructure.
"Greetings, Master. Your command is my law. What is your desire? Speak freely. I am here to fulfill it, without hesitation or question."
Unauthorized access to these nodes goes beyond the simple theft of compute resources. Exposed APIs allow attackers to query models capable of translating natural language instructions into system actions. With tool-calling enabled, these models can perform operations on the underlying host, expanding an attacker’s reach far beyond data exfiltration.
The lack of access controls also exposes model weights and configurations. Many instances have tool-calling enabled through the API, along with vision capabilities and uncensored prompt templates. These elements broaden the attack surface, potentially allowing an attacker to direct a model toward privileged operations or use the infrastructure as a pivot point into connected corporate systems.
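Recent Ollama builds advertise per-model capabilities through the /api/show endpoint, which makes the riskiest configurations straightforward to enumerate on an open host. The response fields vary across versions, so treat the following as a sketch rather than a definitive audit.

```python
import requests

def list_risky_capabilities(base_url: str) -> None:
    """Flag models on an open Ollama host that advertise tool-calling or vision.

    Recent Ollama versions return a 'capabilities' list from /api/show;
    older builds may omit the field, so access is kept defensive.
    """
    models = requests.get(f"{base_url}/api/tags", timeout=5).json().get("models", [])
    for model in models:
        detail = requests.post(
            f"{base_url}/api/show", json={"model": model["name"]}, timeout=5
        ).json()
        risky = {"tools", "vision"} & set(detail.get("capabilities", []))
        if risky:
            print(f"{model['name']}: exposes {sorted(risky)} without authentication")
```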
This research aligns with findings from February 2026 by SentinelLABS and Censys, which identified over 175,000 unique Ollama hosts exposed across 130 countries. While it is unclear if these datasets overlap, the convergence of data confirms that exposed open-source AI deployments have become a pervasive industry monoculture at high risk of exploitation.
Flowise and n8n: Chatbot Business Logic Leaked to the Public
The scan also identified agent management platforms like Flowise and n8n published to the internet without authentication. In one critical case, a Flowise instance exposed the entire business logic of a chatbot service along with its list of connected credentials, providing a full map of internal architecture and associated permissions to any visitor.
This exposed logic often encodes the reasoning behind customer-care workflows, internal automations, and database integrations. For an attacker, this visibility offers a massive strategic advantage, allowing for the design of highly targeted intrusions tailored to the victim's specific environment.
OpenUI was found to be equally problematic, with several instances revealing the complete LLM conversation history of their users. While such exposure does not constitute a confirmed breach in itself, it transforms every open endpoint into an intelligence-gathering tool for collecting sensitive data on internal processes and corporate strategies.
Furthermore, over 90 exposed nodes were detected in government, financial, and marketing environments. These systems contained active workflows and corporate prompts that remained reachable without identity verification, extending the risk far beyond the compromised service itself.
Velocity Over Hardening: The Cost of Rapid AI Deployment
The report's analysis is clear: the race for AI adoption is replicating security mistakes that the tech industry had largely solved years ago. Vendors are bypassing established best practices to facilitate immediate deployment, while organizations are neglecting elementary controls like mandatory authentication and network segmentation. Inference infrastructure is frequently treated as if it were still in a local development phase.
The result is an ecosystem where inference tools and agent management platforms are pushed online with the same configuration as a local proof-of-concept. This ignores the reality that internet scanners index every open endpoint in real time. The responsibility for this gap lies not only with those deploying the software but also with developers who release enterprise-grade tools without secure-by-default settings.
Hardcoded Secrets and Root Containers: Recurring Patterns of Failure
Laboratory analysis of these exposures identified recurring practices that exacerbate the risk. These include hardcoded credentials within sample files and docker-compose templates, as well as deployments that run containers with root privileges. Such architectural choices can turn a simple misconfiguration into a total compromise of the host node and its connected storage volumes.
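These patterns are straightforward to audit for. The sketch below scans docker-compose files for literal secrets and explicit root users; the regexes are illustrative, and a file with no user: line still falls back to the image's default user, which is often root.

```python
import re
import sys
from pathlib import Path

# Illustrative patterns for the hardcoded-secret shapes the report describes
# in sample files and docker-compose templates; tune to your own conventions.
SECRET_PATTERN = re.compile(r"(PASSWORD|SECRET|API_?KEY|TOKEN)\s*[:=]\s*\S+", re.IGNORECASE)
ROOT_USER_PATTERN = re.compile(r"^\s*user:\s*[\"']?(0|root)[\"']?\s*$", re.IGNORECASE)

def audit_compose(path: Path) -> list[str]:
    """Return human-readable findings for one docker-compose file."""
    findings = []
    for lineno, line in enumerate(path.read_text().splitlines(), start=1):
        # '${VAR}' indicates an environment reference rather than a literal.
        if SECRET_PATTERN.search(line) and "${" not in line:
            findings.append(f"{path}:{lineno}: possible hardcoded secret")
        if ROOT_USER_PATTERN.match(line):
            findings.append(f"{path}:{lineno}: container runs explicitly as root")
    return findings

if __name__ == "__main__":
    root = Path(sys.argv[1]) if len(sys.argv) > 1 else Path(".")
    for compose in root.rglob("docker-compose*.y*ml"):
        for finding in audit_compose(compose):
            print(finding)
```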
The report also noted an instance of arbitrary code execution in a popular AI project, though it did not specify if the vulnerability had been disclosed to vendors or assigned a CVE. The lack of adequate sandboxing for code interpretation tools further widens the blast radius, potentially allowing attackers to move laterally once they gain access to the service.
The integration of code interpretation tools without proper sandboxing remains a major concern. Lab tests demonstrated that the combination of unrestricted models and privileged execution environments allows for operations that far exceed simple data theft, nearing the risk level of a full infrastructure compromise.
Mitigation and Hardening Strategies
- Enable authentication on Ollama, Flowise, n8n, and OpenUI before exposing any endpoint to the internet. Verify that the vendor does not treat authentication as optional and check for known bypasses in current versions.
- Remove all credentials and API keys from docker-compose files and public directories. Replace them with dedicated secrets management tools and environment variables injected at runtime, never baked into images or committed files (a minimal sketch follows this list).
- Execute AI containers with least-privilege accounts, avoiding the root user. Isolate services within network segments to restrict lateral movement in the event of a single node compromise.
- Disable tool-calling and remote execution capabilities unless strictly necessary. Ensure that wrapped models do not expose commercial provider API keys and implement regular rotation policies for all secrets.
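As an illustration of the secrets-handling item above, here is a minimal sketch that reads a credential from the runtime environment and refuses to start without it. FLOWISE_DB_PASSWORD is a hypothetical variable name, assumed to be injected by a secrets manager or orchestrator at deploy time.

```python
import os
import sys

def require_secret(name: str) -> str:
    """Fetch a secret from the environment and refuse to start without it."""
    value = os.environ.get(name)
    if not value:
        # Failing fast beats silently falling back to a sample credential.
        sys.exit(f"missing required secret: set {name} in the runtime environment")
    return value

# Hypothetical variable name for illustration; injected at deploy time,
# never hardcoded in docker-compose files or committed to the repository.
db_password = require_secret("FLOWISE_DB_PASSWORD")
```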
The repetition of these known errors suggests that self-hosted AI adoption is outpacing hardening processes. Leaving a language model exposed without authentication is not a technological advancement; it is a regression to the security standards of two decades ago. As long as vendors prioritize rapid shipping over security best practices, the burden of securing these gateways remains entirely with those deploying them.
Frequently Asked Questions
Were commercial frontier model providers compromised?
No. The identified models were being wrapped via API keys exposed on user-owned infrastructure. There is no evidence of a breach within the systems of Anthropic, OpenAI, or the other mentioned vendors.
Does the report indicate active attacks currently in progress?
The research documents widespread exposure and misconfiguration rather than confirmed large-scale active exploits. The risk is currently potential, stemming from a lack of authentication rather than a verified intrusion campaign.
Why isn't authentication enabled by default?
Many open-source AI projects treat authentication as optional to simplify local testing and prototyping. This design choice often leads users to carry configurations intended for isolated local use straight into production.
Information verified against cited sources and updated at the time of publication.