Critical Unpatched Flaw Leaves Hugging Face LeRobot Open to Unauthenticated RCE


2026-04-28 00:00


Executive Summary

A critical, unpatched deserialization flaw tracked as CVE-2026-25874 (CVSS score: 9.3) in LeRobot, Hugging Face's open-source robotics platform, can be exploited by an unauthenticated, network-reachable attacker to achieve remote code execution. The bug stems from the use of pickle.loads() on data received over unauthenticated, unencrypted gRPC channels in the async inference policy server and robot client components. The flaw has been validated against LeRobot version 0.4.3, and a fix is planned for version 0.6.0.

Details

Cybersecurity researchers have disclosed details of a critical security flaw impacting LeRobot, Hugging Face's open-source robotics platform with nearly 24,000 GitHub stars, that could be exploited to achieve remote code execution. The vulnerability in question is CVE-2026-25874 (CVSS score: 9.3), which has been described as a case of untrusted data deserialization stemming from the use of the unsafe pickle format.

"LeRobot contains an unsafe deserialization vulnerability in the async inference pipeline, where pickle.loads() is used to deserialize data received over unauthenticated gRPC channels without TLS in the policy server and robot client components," according to a GitHub advisory for the flaw. "An unauthenticated network-reachable attacker can achieve arbitrary code execution on the server or client by sending a crafted pickle payload through the SendPolicyInstructions, SendObservations, or GetActions gRPC calls."

According to Resecurity, the problem is rooted in the async inference PolicyServer component, allowing an unauthenticated attacker who can reach the PolicyServer network port to send a malicious serialized payload and run arbitrary operating system commands on the host machine running the service. The cybersecurity company said the vulnerability is "dangerous" because the service is designed for artificial intelligence inference systems, which tend to run with elevated privileges to access internal networks, datasets, and expensive compute resources. If exploited, the flaw could enable a wide range of malicious actions.

VulnCheck security researcher Valentin Lobstein, who discovered and published additional details of the shortcoming last week, said it has been successfully validated against LeRobot version 0.4.3. The issue currently remains unpatched, with a fix planned in version 0.6.0. Interestingly, the same flaw was independently reported by another researcher who goes by the online alias "chenpinji" sometime in December 2025.
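The root cause the advisory describes, calling pickle.loads() on attacker-controlled bytes, can be illustrated with a minimal, benign sketch. No gRPC is involved here; the Gadget class and the eval payload are illustrative stand-ins for whatever a real attacker would serialize, not code from LeRobot or the actual exploit:

```python
import pickle

class Gadget:
    # pickle calls the callable returned by __reduce__ with the given
    # args at *deserialization* time. A real payload would use a
    # callable like os.system instead of this benign eval.
    def __reduce__(self):
        return (eval, ("6 * 7",))

payload = pickle.dumps(Gadget())  # bytes an attacker could send over the wire
result = pickle.loads(payload)    # eval("6 * 7") executes during unpickling
print(result)                     # 42; code ran on the receiving side
```

The key point is that the receiving process does not need the Gadget class at all: the serialized stream itself names the callable and its arguments, so any service that unpickles untrusted input hands code execution to the sender.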

The LeRobot team responded earlier this January, acknowledging the security risk and noting "that part of the codebase needs to be almost entirely refactored as its original implementation was more experimental."

"That said, LeRobot has so far been primarily a research and prototyping tool, which is why deployment security hasn't been a strong focus until now," Steven Palma, tech lead of the project, said. "As LeRobot continues to be adopted and deployed in production, we'll start paying much closer attention to these kinds of issues. Fortunately, being an open-source project, the community can also help by reporting and fixing vulnerabilities."

The findings once again expose the dangers of using the pickle format, as it paves the way for arbitrary code execution attacks simply by loading a specially crafted file.

"The irony here is hard to overstate," Lobstein noted. "Hugging Face created Safetensors, a serialization format designed specifically because pickle is dangerous for ML data. And yet their own robotics framework deserializes attacker-controlled network input with pickle.loads(), with # nosec comments to silence the tool that was trying to warn them."
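Where the pickle wire format cannot be dropped immediately, the Python documentation's restricted-unpickler pattern is a common stopgap: it blocks the global lookups that code-execution gadgets rely on. The sketch below is a generic illustration of that pattern under our own assumptions, not LeRobot's planned fix:

```python
import io
import pickle

class RestrictedUnpickler(pickle.Unpickler):
    """Allow-list unpickler (pattern from the Python pickle docs).

    Every GLOBAL reference in a pickle stream resolves through
    find_class, so refusing unknown globals defeats the
    (callable, args) gadgets that make pickle.loads dangerous.
    """
    # Hypothetical allow-list; empty means plain data (dicts, lists,
    # numbers, strings) only. Extend deliberately, never with eval/os.
    ALLOWED = set()

    def find_class(self, module, name):
        if (module, name) in self.ALLOWED:
            return super().find_class(module, name)
        raise pickle.UnpicklingError(f"blocked global: {module}.{name}")

def restricted_loads(data: bytes):
    """Drop-in replacement for pickle.loads on untrusted input."""
    return RestrictedUnpickler(io.BytesIO(data)).load()
```

Plain data round-trips normally, while any payload that names a callable (eval, os.system, and so on) is rejected at load time. It is a mitigation, not a cure: the robust fix remains authenticating the channel and moving to a data-only format such as Safetensors or JSON.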