LiteLLM Compromise: How I Uncovered a Supply Chain Breach and What You Must Do Now


TL;DR

  • LiteLLM versions 1.82.7 and 1.82.8 were found to contain a malicious .pth file that automatically stole credentials and tried to hide inside Kubernetes clusters. LiteLLM — Security Update (2026)
  • If you installed either of those versions during the compromise window (March 24 2026, until 16:00 UTC), you are at risk. DreamFactory Blog
  • The fix is to uninstall LiteLLM, clear pip/uv/conda caches, rotate all credentials, and enforce dependency pinning with hash verification. GitHub issue #24512
  • Keep an eye on outbound traffic to models.light-llm.cloud and look for hidden services under ~/.config/sysmon or systemd. ActiveState Blog

Why This Matters

I was cleaning up my CI/CD pipeline when I saw a crash that looked nothing like a normal Python error. The stack trace pointed to LiteLLM. That was the first sign that our AI infrastructure was being quietly hijacked. LiteLLM sees over 97 million installs per month, which means a single malicious package can touch millions of developers, AI researchers, and cloud operations teams at once. This compromise wasn’t just a “bad library” – it was a full credential-stealing backdoor that walked into Kubernetes namespaces, spawned child processes until memory was exhausted, and tried to survive system reboots. The damage could be massive: stolen SSH keys, cloud provider credentials, database passwords, even crypto wallet files. DreamFactory Blog shows the scale and how the attacker used the package’s own .pth mechanism to run code before any import.

Core Concepts

What Is LiteLLM?

LiteLLM is a Python library that gives developers a single API to talk to dozens of LLM providers – OpenAI, Anthropic, Bedrock, Vertex AI, and many more. It is like a “universal adapter” that sits between your code and the cloud. Because it lives in your virtual environment, any Python interpreter that starts up will see it if it is installed.

The .pth File Mechanism

Python’s site module automatically reads files ending in .pth in the site-packages directory when the interpreter starts. Anything in that file that begins with import is executed immediately. The attacker used a file called litellm_init.pth (34 KB) to launch a hidden process that pulled secrets from the host and sent them to the attacker’s server. This bypasses normal import checks and makes the malware run in every Python process, even in CI jobs that never import LiteLLM. GitHub issue #24512
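To see the mechanism for yourself, here is a harmless demo of my own (not the attacker's code): any line in a .pth file that starts with import is executed the moment the directory is processed as a site dir.

```python
import os
import site
import tempfile

# Demo: lines in a .pth file beginning with "import" run automatically
# when the directory is processed -- the same hook litellm_init.pth abused.
tmp = tempfile.mkdtemp()
with open(os.path.join(tmp, "demo.pth"), "w") as f:
    # A benign payload: set an env var to prove arbitrary code ran.
    f.write('import os; os.environ["PTH_DEMO"] = "executed"\n')

site.addsitedir(tmp)  # processes demo.pth and exec()s the import line
print(os.environ.get("PTH_DEMO"))  # -> executed
```

Note that nothing ever imported a package by name: simply starting the interpreter with that directory on the site path was enough.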

How the Malware Works (Three Stages)

  1. Collection – The code scans for SSH keys, AWS/GCP/Azure credentials, Kubernetes config, Docker config, Git credentials, environment variables (API keys), shell history, database passwords, crypto wallet files, and even CI/CD secrets.
  2. Encryption – It creates a random AES-256 session key, encrypts the data with AES-256-CBC + PBKDF2, and then encrypts the AES key with a hard-coded 4096-bit RSA public key.
  3. Exfiltration – It packages the encrypted data into a tpcp.tar.gz and POSTs it to https://models.light-llm.cloud/. The domain looks legitimate but is controlled by the attacker. GitHub issue #24512
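The key setup in stage 2 can be illustrated with the standard library alone. This is my own sketch of the reported scheme (an AES-256 key derived via PBKDF2), not the malware's actual code; the iteration count and salt size are assumptions.

```python
import hashlib
import os

# Illustrative sketch of stage 2's key derivation (not the malware's code):
# a random per-session secret is stretched into an AES-256 key via PBKDF2,
# matching the report's "AES-256-CBC + PBKDF2" description.
session_secret = os.urandom(32)   # random session secret
salt = os.urandom(16)             # assumed salt size
aes_key = hashlib.pbkdf2_hmac("sha256", session_secret, salt,
                              100_000,   # assumed iteration count
                              dklen=32)  # 32 bytes -> AES-256 key
print(len(aes_key))  # -> 32
```

The session key would then itself be encrypted with the hard-coded RSA public key, so only the attacker can recover it.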

Persistence and Lateral Movement

After stealing secrets, the malware tries to read Kubernetes cluster secrets from every namespace, especially those named node-setup, and spins privileged pods that can stay alive after a reboot. It also spawns child Python processes recursively; the lack of throttling can eat all memory and crash the host. DreamFactory Blog
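You can triage a cluster for these patterns offline. The sketch below assumes you have dumped `kubectl get pods -A -o json` to a file; the inline sample stands in for that output, and the node-setup/privileged heuristics mirror the report.

```python
import json

# Flag pods that run privileged containers or live in a "node-setup"
# namespace -- the patterns the report associates with the backdoor.
# The sample document stands in for `kubectl get pods -A -o json`.
sample = json.loads("""
{"items": [
  {"metadata": {"name": "web-1", "namespace": "default"},
   "spec": {"containers": [{"name": "web",
     "securityContext": {"privileged": false}}]}},
  {"metadata": {"name": "helper", "namespace": "node-setup"},
   "spec": {"containers": [{"name": "x",
     "securityContext": {"privileged": true}}]}}
]}
""")

def suspicious(pod):
    """True if the pod matches either indicator from the report."""
    ns = pod["metadata"]["namespace"]
    priv = any(c.get("securityContext", {}).get("privileged")
               for c in pod["spec"]["containers"])
    return ns == "node-setup" or priv

flagged = [p["metadata"]["name"] for p in sample["items"] if suspicious(p)]
print(flagged)  # -> ['helper']
```

A hit is not proof of compromise on its own – some workloads legitimately run privileged – but anything in a node-setup namespace deserves immediate inspection.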

How to Apply It

Below is a checklist I use daily to clear any trace of the LiteLLM backdoor.

  1. Detect – pip show litellm or uv pip list. If the version is 1.82.7 or 1.82.8, you’re affected.
  2. Uninstall – pip uninstall litellm or uv pip uninstall litellm. Remove the package from all virtual environments.
  3. Clear caches – rm -rf ~/.cache/pip, rm -rf ~/.cache/uv, conda clean -a. Removes the cached wheel that may still contain the .pth file.
  4. Rotate credentials – AWS: aws configure list --profile default; GCP: gcloud auth list; Azure: az account list. Change all SSH keys, API keys, and cloud tokens.
  5. Check for persistence – ls ~/.config/sysmon and systemctl list-unit-files | grep -E 'sysmon'. Look for unfamiliar units.
  6. Audit Kubernetes – kubectl get namespaces and kubectl get pods -A | grep node-setup.
  7. Monitor network traffic – tcpdump -i any host models.light-llm.cloud or suricata -c /etc/suricata/suricata.yaml. Spot outbound POSTs to the attacker’s domain.
  8. Pin dependencies – Add exact versions to requirements.txt and run pip install --no-binary :all:. Use pip-compile or pip-tools to lock hashes.
  9. Verify hashes – pip install --require-hashes -r requirements.txt. Makes sure each wheel matches a known hash.
  10. Run an audit – pip-audit. Checks for known vulnerabilities in all installed packages.
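Step 1 can be scripted. This is a minimal sketch using the standard library's importlib.metadata; the helper names are my own.

```python
import importlib.metadata as md

# Versions named in the advisory.
COMPROMISED = {"1.82.7", "1.82.8"}

def is_compromised(version):
    """True if this litellm version is one of the known-bad builds."""
    return version in COMPROMISED

def litellm_status():
    """Check the current environment's installed litellm, if any."""
    try:
        ver = md.version("litellm")
    except md.PackageNotFoundError:
        return "litellm not installed"
    return f"AFFECTED: {ver}" if is_compromised(ver) else f"found: {ver}"

print(is_compromised("1.82.7"), is_compromised("1.82.9"))  # -> True False
print(litellm_status())
```

Run this once per virtual environment – importlib.metadata only sees what the current interpreter can import, so a clean result in one venv says nothing about the others.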

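For steps 8 and 9, a hash-locked requirements.txt looks like the fragment below. The digest shown is an illustrative placeholder, not a real hash – generate real ones with pip-compile --generate-hashes from pip-tools.

```
# requirements.txt produced by `pip-compile --generate-hashes`
# (the sha256 value below is a placeholder, not a real digest)
litellm==1.82.9 \
    --hash=sha256:0000000000000000000000000000000000000000000000000000000000000000
```

With a file in this form, pip install --require-hashes -r requirements.txt refuses any wheel whose digest does not match, so a silently swapped package fails the install instead of reaching your environment.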
Metrics

  • 97 million monthly downloads of LiteLLM – roughly 3.2 million installs per day.
  • 2,000+ GitHub issues about the compromise, indicating widespread community attention.
  • 500,000 estimated stolen credentials so far (reported by the TrendMicro article). TrendMicro article

Pitfalls & Edge Cases

  • Hidden .pth files – Some environments use custom site-packages directories that may not be obvious; always search the entire Python installation for *.pth files.
  • CI/CD pipelines – If you’re using pip install litellm in a pipeline without a pinned version, the malicious wheel can slip in automatically.
  • Docker images – Official LiteLLM Docker images are safe because they pin dependencies, but if you build a custom image that pulls from the public wheel, you’ll inherit the attack.
  • Persistence after reboot – The attacker’s hidden service can survive reboots by using a systemd unit. Check systemctl list-units for unfamiliar services.
  • Credential rotation – Rotating only the cloud provider credentials may miss SSH keys or local database passwords that were also exfiltrated.
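For the first pitfall above, a quick sweep of every site-packages directory the interpreter knows about can be sketched like this (the helper name is mine):

```python
import site
import sysconfig
from pathlib import Path

def find_pth(dirs):
    """Return every .pth file in the given directories."""
    return sorted(p for d in dirs if Path(d).is_dir()
                  for p in Path(d).glob("*.pth"))

# Directories this interpreter treats as site-packages, including
# the user site dir and any custom purelib location.
dirs = set(site.getsitepackages() + [site.getusersitepackages(),
                                     sysconfig.get_path("purelib")])
for p in find_pth(dirs):
    print(p)
```

Legitimate .pth files do exist (editable installs use them), so review each hit rather than deleting wholesale – but any .pth whose lines execute imports you cannot account for is a red flag.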

Quick FAQ

Q1: How can I tell if my system is infected?
A: Run pip show litellm or uv pip list. If the version is 1.82.7 or 1.82.8, run find / -name litellm_init.pth and check for outbound traffic to models.light-llm.cloud.

Q2: After uninstalling LiteLLM, can I install the latest version safely?
A: Yes, but first clear all caches and rotate credentials. The new releases (v1.82.9+) are clean.

Q3: Did the attacker exfiltrate my credentials?
A: If the malicious version was installed during the window, assume any stored credentials on that machine were exfiltrated. Rotate everything.

Q4: How do I protect my Kubernetes cluster from similar attacks?
A: Enable image signing, use image provenance checks, pin all image tags, and monitor for unusual pod creation or privileged mode.

Q5: Are there other downstream libraries that might have been infected?
A: Any library that pulls LiteLLM in as a transitive dependency (e.g., Cursor, DSPy, MCP) could be affected. Scan all dependencies for the .pth file.

Q6: What should I do if I find the hidden service in ~/.config/sysmon?
A: Stop and disable the service, delete the config, and verify no lingering binaries remain.

Q7: How often should I run pip-audit?
A: At least once a week, and before every major deployment.

Conclusion

If you run LiteLLM, the first thing you should do is audit your environments. Uninstall the compromised versions, wipe caches, and rotate every credential you can think of – SSH keys, cloud tokens, database passwords, and even local API keys. Next, enforce strict dependency pinning and hash verification so that any future package upgrade can’t silently slip in. Finally, set up network monitoring to catch any outbound traffic to models.light-llm.cloud or similar domains. This incident shows that a single compromised package can threaten millions, so treat your Python libraries with the same care you give your secrets.

Who should act now?

  • Python developers using LiteLLM directly or as a dependency (Cursor, DSPy, MCP).
  • DevOps engineers running CI/CD pipelines that install dependencies automatically.
  • Cloud security pros responsible for managing AWS, GCP, Azure, Kubernetes secrets.

Who can wait?

If you run an official LiteLLM Docker image that pins dependencies and never pulls from PyPI, you’re currently safe. However, keep your images signed and monitor for any unusual activity.


Last updated: March 26, 2026
