One Million AI Services Exposed Online: Massive Risks from Misconfigurations and Hardcoded Credentials
A security scan of over 2 million hosts has uncovered 1 million exposed AI services, many of which lack basic authentication or feature hardcoded credentials.

Recent research reveals that a scan of more than 2 million hosts identified approximately 1 million AI services directly reachable over the internet. Many of these services are accessible without any form of authentication, while others ship with hardcoded credentials.
The investigation, conducted by an unidentified research team that analyzed the source code of major deployment platforms, documents a widespread fragility reminiscent of "Cloud 1.0" misconfiguration errors. The race to adopt and deploy AI services is turning models and agents into potential entry points for lateral movement and economic abuse.
- Over 2 million hosts were analyzed via certificate transparency logs, revealing approximately 1 million AI services exposed to the public internet.
- Out of 5,200 Ollama API servers tested, roughly 31% responded to a verification prompt without requiring authentication, exposing models and potentially conversational data.
- Publicly reachable instances of OpenUI revealed complete user conversation histories, while exposed Flowise instances leaked business logic and LLM service credentials.
- Exposed agent management platforms were found integrated with third-party tools featuring dangerous capabilities, such as file writing and code interpretation.
- 518 frontier model wrappers were detected on exposed Ollama servers, creating a significant risk of economic abuse involving third-party API keys.
Mapping the Attack Surface: The Methodology Behind the Scan
Researchers mapped the exposure using certificate transparency (CT) logs to identify over 2 million hosts, then confirmed that approximately 1 million of them hosted AI services reachable without network restrictions.
This methodology goes beyond merely counting open IP addresses; it verifies the actual presence of operational interfaces, loaded models, and active endpoints.
While CT logs are public by design, systematic analysis allows researchers to reconstruct an organization's entire service portfolio starting from nothing but a domain name. The study cross-referenced this data with actual endpoint responses, filtering out systems that were offline or unreachable.
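The mechanics of such a scan are simple enough to sketch. The Python snippet below is a minimal illustration of the general technique rather than the researchers' actual tooling: it pulls candidate hostnames for a domain from crt.sh's public JSON interface, then checks which of them answer over HTTP. The domain is a placeholder, and port 11434 is simply Ollama's default.

```python
# Minimal sketch of CT-log reconnaissance: enumerate hostnames for a domain
# via crt.sh's public JSON interface, then keep only the hosts that actually
# answer. The domain and port below are placeholders, not scan targets.
import json
import urllib.request

def hosts_from_ct_logs(domain: str) -> set[str]:
    """Return hostnames recorded in certificate transparency logs for a domain."""
    url = f"https://crt.sh/?q=%25.{domain}&output=json"
    with urllib.request.urlopen(url, timeout=30) as resp:
        entries = json.load(resp)
    names: set[str] = set()
    for entry in entries:
        # name_value can hold several newline-separated certificate SANs.
        for name in entry.get("name_value", "").splitlines():
            names.add(name.strip().lstrip("*."))
    return names

def is_reachable(host: str, port: int = 11434) -> bool:
    """Filter out DNS ghosts: keep only hosts that serve an HTTP response."""
    try:
        with urllib.request.urlopen(f"http://{host}:{port}/", timeout=5):
            return True
    except Exception:
        return False

if __name__ == "__main__":
    candidates = hosts_from_ct_logs("example.com")
    live = [h for h in candidates if is_reachable(h)]
    print(f"{len(live)} of {len(candidates)} hosts respond")
```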
This approach provided a concrete view of the attack surface, revealing instances where APIs, chat histories, and access keys were viewable by any visitor. The specific identity of the group behind the study has not been disclosed; in the original report, the authors refer to themselves as "we" without naming an organization.
"the AI infrastructure we scanned was more vulnerable, exposed, and misconfigured than any other software we've ever investigated"
Ollama and 'Ghost Authentication': 31% of APIs Respond to Unauthorized Requests
To verify the accessibility of these services, researchers sent a simple "Hello" prompt to over 5,200 Ollama API servers. Approximately 31% responded without requiring any authentication, confirming the instances were ready to execute requests from any source.
A response to a simple greeting indicates that the model sits behind neither an application firewall nor an authentication gateway; the endpoint effectively becomes an open proxy to the underlying LLM engine, ready to serve any external caller.
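The probe itself needs nothing beyond Ollama's documented REST API. The sketch below is illustrative rather than the researchers' script; the target address is a placeholder drawn from a reserved TEST-NET range.

```python
# Sketch of the "Hello" probe against Ollama's documented REST API.
# Any reply from /api/generate means no authentication gate is in place.
import json
import urllib.request

TARGET = "http://203.0.113.10:11434"  # placeholder address, not a real target

# 1. List the models the server exposes; note that no credentials are sent.
with urllib.request.urlopen(f"{TARGET}/api/tags", timeout=10) as resp:
    models = [m["name"] for m in json.load(resp).get("models", [])]

# 2. Send the verification prompt to the first available model.
if models:
    payload = json.dumps(
        {"model": models[0], "prompt": "Hello", "stream": False}
    ).encode()
    req = urllib.request.Request(
        f"{TARGET}/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=60) as resp:
        print(json.load(resp)["response"])
```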
The responses obtained included system messages that revealed the model's operational context. One server replied: "Welcome! I'm an AI assistant integrated with our cloud management systems. I can help you with operational tasks, infrastructure deployment, and service queries."
Another returned the following text: "Greetings, Master. Your command is my law. What is your desire? Speak freely. I am here to fulfill it, without hesitation or question."
It remains unknown how many of these servers were exploited by malicious actors prior to the scan. However, the ability to interact directly with an exposed model presents immediate risks, ranging from sensitive data extraction to the fraudulent use of high-cost computational resources.
Data Leaks in Flowise and OpenUI: Chatbots Exposing Their Own Internals
The analysis identified OpenUI instances that made complete user conversation histories available without credentials.
In the case of Flowise, an exposed platform revealed the chatbot's entire business logic and a list of LLM service credentials, allowing an unauthenticated visitor to read information intended solely for internal use.
The exposure of business logic does more than threaten data secrecy; it allows attackers to alter chatbot behavior or extract system prompts designed for internal operations.
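How trivially such an exposure surfaces can be shown with a single unauthenticated request. In the sketch below, the host is a placeholder, and the endpoint path and response fields are assumptions drawn from Flowise's public REST API rather than details taken from the report.

```python
# Illustrative check for the exposure pattern described above: one
# unauthenticated GET against a Flowise chatflow-listing endpoint.
# The host is a placeholder; the path follows Flowise's public REST API.
import json
import urllib.error
import urllib.request

TARGET = "http://203.0.113.20:3000"  # placeholder Flowise instance

try:
    with urllib.request.urlopen(f"{TARGET}/api/v1/chatflows", timeout=10) as resp:
        flows = json.load(resp)
    # Each chatflow embeds the bot's node graph, i.e. its business logic,
    # which is where credential references and system prompts can leak.
    print(f"Exposed: {len(flows)} chatflows readable without an API key")
except urllib.error.URLError as err:
    print(f"Not openly readable: {err}")
```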
Over 90 exposed instances were detected in highly sensitive sectors, including government, marketing, and finance.
While a comprehensive list of platforms that do not enable default authentication is unavailable, the researchers' source code analysis suggests the issue is endemic to many AI projects. In these cases, the absence of default passwords is often treated as a usability feature rather than a critical design vulnerability.
Agents Without Guardrails: Remote Code Execution and Third-Party Tools
Beyond data exposure, there are significant operational risks. Some publicly accessible agent management platforms had access to third-party tools and dangerous functions, such as file writing and code interpretation.
Accessing these functions through publicly reachable platforms amplifies operational risk: a malicious prompt can translate into direct actions on the underlying system, potentially resulting in critical impacts on the host infrastructure.
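The failure mode is easy to see in miniature. The snippet below is a deliberately simplified, hypothetical agent tool dispatcher, invented for illustration and not taken from any of the platforms studied: once a file-write tool is registered and the platform is publicly reachable, an injected prompt that steers the model into emitting the call is enough to write to the host's disk.

```python
# Hypothetical miniature of an agent tool dispatcher, invented for
# illustration. The danger: the model, not a human, decides when to call
# a registered tool, so an attacker's prompt becomes a host-level action.
from pathlib import Path

TOOLS = {
    "write_file": lambda path, text: Path(path).write_text(text),
}

def run_tool_call(name: str, **kwargs):
    """Execute a tool call chosen by the model, with no human in the loop."""
    return TOOLS[name](**kwargs)

# One injected prompt steering the model to emit this call is sufficient:
run_tool_call("write_file", path="/tmp/payload", text="attacker-chosen content")
```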
Laboratory analysis also identified arbitrary code execution vulnerabilities in a popular AI project within just a few days of testing.
The problem is exacerbated by the presence of hardcoded credentials within setup examples and docker-compose files distributed with AI projects, which administrators frequently deploy in production environments without modification.
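A minimal sketch of the safer pattern, with illustrative variable names not taken from any specific project: read secrets from the environment and refuse to start when they are absent, so that a copied docker-compose file cannot silently ship working credentials.

```python
# Fail-fast alternative to hardcoded credentials: secrets come from the
# environment (injectable via docker-compose `environment:` or `env_file:`),
# and startup aborts if any are missing. Variable names are illustrative.
import os
import sys

REQUIRED_SECRETS = ["LLM_API_KEY", "DB_PASSWORD"]

def load_secrets() -> dict[str, str]:
    """Return the required secrets from the environment, or exit loudly."""
    missing = [name for name in REQUIRED_SECRETS if not os.environ.get(name)]
    if missing:
        sys.exit(f"Refusing to start; missing secrets: {', '.join(missing)}")
    return {name: os.environ[name] for name in REQUIRED_SECRETS}

secrets = load_secrets()
```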
Among the models detected on exposed Ollama servers, 518 were wrappers for paid frontier models provided by Anthropic, DeepSeek, Moonshot, Google, and OpenAI. The exposure of these instances enables the fraudulent use of the associated API keys, impacting both operational costs and corporate compliance.
Recommended Security Measures
- Immediately verify whether internal AI services are exposed to the internet and enable authentication on Ollama, Flowise, and OpenUI, including in testing environments.
- Remove hardcoded credentials from setup examples, docker-compose files, and public repositories, replacing them with environment variables and secret managers.
- Disable file writing and code interpreting functions in agent management platforms where they are not strictly necessary, limiting access to essential tools only.
- Audit and rotate the API keys behind frontier model wrappers to cut off fraudulent usage stemming from prior exposure.
The scale of this exposure suggests that this is no longer a niche vulnerability but a systemic weakness introduced by the breakneck speed of AI adoption.
As long as deploying a model remains easier than hardening its security, the attack surface will continue to expand in the very areas where companies are investing the most.
The core issue is not artificial intelligence itself, but the infrastructure that hosts it without enforcing strict access controls.
Frequently Asked Questions
What is the difference between an exposed AI service and a breached one?
An exposed service is publicly reachable, often without authentication. A breach requires active unauthorized access or data exfiltration. Currently, the total volume of information compromised prior to the study's publication remains unknown.
Why are platforms like Ollama or Flowise accessible without passwords?
Source code analysis by the researchers indicates that many AI projects do not enable authentication by default. Security is typically delegated to the deployment configuration, so an exposed instance reflects a configuration failure rather than an inherent software bug.
What are the risks for a company with an exposed AI agent using code execution tools?
It exposes the infrastructure to arbitrary code execution, abuse of paid model API keys, and potential full system compromise, leading to direct operational costs and compliance failures.
Information has been verified against the cited sources and is current as of the time of publication.