
The rise of LLMjacking, a sophisticated cyberattack targeting large language models (LLMs), has sparked growing concerns among enterprises relying on AI-driven cloud services. This technique, which involves the theft and misuse of API keys to exploit cloud-hosted LLMs, has recently expanded to include platforms like DeepSeek, highlighting the evolving risks in AI ecosystems.

What is LLMjacking?

LLMjacking refers to the unauthorized use of stolen cloud credentials to gain access to LLMs for malicious purposes. Attackers exploit these credentials to:

  • Run unauthorized queries on AI models.
  • Enable additional foundation models in the compromised cloud account, driving up usage costs for the victim.
  • Monetize access by reselling it on underground markets.

This attack method is particularly dangerous because it often goes unnoticed until significant financial or operational damage has occurred. For instance, attackers commonly use reverse proxies, such as oai-reverse-proxy or one-api, to manage stolen credentials while avoiding detection.
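
At a conceptual level, these proxies sit between end users and the upstream LLM API and spread requests across a pool of stolen keys, so no single credential shows a suspicious spike. The sketch below illustrates that key-pooling pattern in Python; it is a simplified illustration of the general technique, not the actual code of oai-reverse-proxy or one-api, and the upstream URL and key values are placeholders.

```python
# Minimal sketch of the key-pooling pattern that LLM reverse proxies implement.
# This illustrates the technique only; the upstream URL and keys are placeholders.
import itertools
import requests  # third-party HTTP client

# A pool of (stolen) API keys; the proxy cycles through them so usage is spread
# thinly across many credentials and no single key looks anomalous.
KEY_POOL = itertools.cycle(["sk-key-one", "sk-key-two", "sk-key-three"])

UPSTREAM = "https://api.example-llm-provider.com/v1/chat/completions"  # placeholder

def forward(chat_request: dict) -> dict:
    """Forward one OpenAI-style chat request using the next key in the pool."""
    key = next(KEY_POOL)
    resp = requests.post(
        UPSTREAM,
        headers={"Authorization": f"Bearer {key}"},
        json=chat_request,
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()
```

Because every request reaches the provider from the proxy's own infrastructure, the owner of each stolen key sees only sporadic, low-volume usage, which is why per-key invocation logging and usage baselines matter for detection.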

DeepSeek and the new wave of attacks

DeepSeek, a rising star in the AI landscape, has become a recent target for LLMjacking. Its API keys were quickly incorporated into illicit activities, underscoring how attackers adapt to emerging platforms. These stolen credentials are often leveraged to bypass content filters or execute high-volume model invocations without bearing the associated costs. In one observed case, attackers used compromised API keys from DeepSeek to power AI tools for generating prohibited content, including explicit material. Over 75,000 malicious model invocations were recorded in just two days, demonstrating the scale and profitability of such operations.

How attackers operate

LLMjacking attacks typically follow these steps:

  1. Initial Access: Credentials are stolen via vulnerabilities (e.g., CVE-2021-3129 in Laravel) or misconfigurations in cloud environments.
  2. Validation: Attackers test stolen credentials with legitimate API calls (e.g., InvokeModel) using unconventional parameters such as max_tokens_to_sample = -1, probing access privileges without triggering alarms (a detection sketch follows this list).
  3. Exploitation: Once validated, credentials are used for unauthorized LLM queries or sold on black markets like “LLM Paradise,” where GPT-4 and Claude API keys have been sold for as little as $15.
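
Defenders can surface the validation step by looking for InvokeModel requests that carry implausible parameters such as a negative max_tokens_to_sample. Since InvokeModel is an AWS Bedrock operation, the sketch below assumes Bedrock model invocation logging delivered to CloudWatch Logs; the log group name and the log-event field names are assumptions to verify against your own logging configuration.

```python
# Minimal sketch: scan Bedrock model invocation logs in CloudWatch Logs for
# "validation probes" -- InvokeModel requests with a negative max_tokens_to_sample.
# The log group name and the log-event JSON layout are assumptions; adjust them
# to match your own invocation-logging setup.
import json
import boto3

LOG_GROUP = "/aws/bedrock/modelinvocations"  # assumed log group name

logs = boto3.client("logs")

def find_validation_probes(events: list[dict]) -> list[dict]:
    """Return log records whose request body carries an implausible token limit."""
    suspicious = []
    for event in events:
        record = json.loads(event["message"])
        body = record.get("input", {}).get("inputBodyJson", {})  # assumed field names
        if isinstance(body, dict) and body.get("max_tokens_to_sample", 0) < 0:
            suspicious.append(record)
    return suspicious

# Pull recent events and flag probes; paginate for real workloads.
resp = logs.filter_log_events(logGroupName=LOG_GROUP, limit=1000)
for probe in find_validation_probes(resp.get("events", [])):
    print(probe.get("identity"), probe.get("modelId"), probe.get("requestId"))
```

Flagged identities and request IDs can then be correlated with CloudTrail to determine where the credentials were used from and whether they should be revoked.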

The broader implications

The consequences of LLMjacking extend beyond financial losses:

  • Data Exposure: Attackers can access sensitive corporate data stored within compromised accounts, including intellectual property and personal information.
  • Reputation Damage: Companies targeted by these attacks may face public backlash if their systems are used for generating harmful content.
  • Operational Disruption: High-volume misuse can strain cloud resources, impacting legitimate users and services.

Mitigating the risks

To combat LLMjacking, organizations must adopt robust security practices:

  • Credential Management: Rotate API keys regularly and restrict their scope to minimize exposure.
  • Logging and Monitoring: Enable invocation logging to detect unusual activity early (see the first sketch after this list).
  • Vulnerability Patching: Address known software vulnerabilities promptly to prevent initial access by attackers.
  • Access Controls: Apply the principle of least privilege and monitor for unauthorized model activations or queries (see the second sketch after this list).
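
Taking the logging point concretely: on AWS Bedrock (the service behind the InvokeModel example above), invocation logging is not enabled by default and can be switched on with a single API call. The sketch below does so with boto3, sending records to CloudWatch Logs; the log group name and role ARN are placeholders, and the exact loggingConfig keys should be checked against current AWS documentation.

```python
# Minimal sketch: enable Bedrock model invocation logging so every InvokeModel
# call is recorded. The log group and role ARN are placeholders; verify the
# loggingConfig keys against current AWS documentation.
import boto3

bedrock = boto3.client("bedrock")

bedrock.put_model_invocation_logging_configuration(
    loggingConfig={
        "cloudWatchConfig": {
            "logGroupName": "/aws/bedrock/modelinvocations",          # placeholder
            "roleArn": "arn:aws:iam::123456789012:role/BedrockLogs",  # placeholder
        },
        # Capture prompts and completions so abusive content can be reviewed.
        "textDataDeliveryEnabled": True,
        "imageDataDeliveryEnabled": False,
        "embeddingDataDeliveryEnabled": False,
    }
)
```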
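
For the access-control point, application credentials should only be able to invoke the specific models they need. The sketch below creates a least-privilege IAM policy with boto3 that permits invoking a single foundation model; the policy name, region, and model ARN are placeholders, and in practice the policy would be attached to a dedicated application role rather than a broad user group.

```python
# Minimal sketch: create a least-privilege IAM policy that only allows invoking
# one approved foundation model. The policy name, region, and model ARN are
# placeholders for illustration.
import json
import boto3

iam = boto3.client("iam")

policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "bedrock:InvokeModel",
                "bedrock:InvokeModelWithResponseStream",
            ],
            # Restrict invocation to a single approved model instead of "*".
            "Resource": "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-v2",
        }
    ],
}

iam.create_policy(
    PolicyName="InvokeApprovedModelOnly",  # placeholder name
    PolicyDocument=json.dumps(policy_document),
)
```

Combined with regular key rotation and prompt patching of known vulnerabilities, scoping credentials this narrowly limits what an attacker can do even if a key is stolen.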