Programming

Hugging Face hosted malicious software masquerading as OpenAI release

Published by AIDaily Editorial Team
4 min read
Original source author: AI News

A malicious Hugging Face repository that posed as an OpenAI release delivered infostealer malware to Windows machines and recorded about 244,000 downloads before removal, according to research from AI security firm HiddenLayer. The download count may have been artificially inflated by the attackers to make the model seem more popular, so the true extent of the attack is unknown.

‘Open-OSS/privacy-filter’ imitated OpenAI’s Privacy Filter release. HiddenLayer said the original model card had been copied almost exactly, and the bad actors included a malicious loader.py file that fetched and ran credential-stealing malware on Windows hosts. The repo reached the top of Hugging Face’s ‘trending’ list with 667 likes accrued in less than 18 hours – again, a figure that may have been manipulated by the attackers.

Public AI model registries are becoming software supply-chain risks as developers and data scientists clone models directly into corporate environments, environments that have access to source code, cloud credentials, and internal systems. That alone makes a compromised model repository more than a nuisance.

The README file for the fake model closely resembled that of the legitimate project, but it departed from the original by instructing users to run start.bat on Windows or execute python loader.py on Linux and macOS, instructions central to the infection chain HiddenLayer described. Researchers have previously warned that malicious code can be hidden inside AI model files or related setup scripts on Hugging Face and other public registries; earlier cases involved Pickle-serialised model files that bypassed platform scanners.

Malicious loader disguised as setup code

HiddenLayer said loader.py began with decoy code that resembled a normal AI model loader before moving quickly to a concealed infection chain.
A script disabled SSL verification, decoded a base64-encoded URL pointing at jsonkeeper.com, retrieved a remote payload instruction, and passed commands to PowerShell on Windows machines. HiddenLayer said using jsonkeeper.com as a command-and-control channel allowed the attacker to rotate the payload without changing the repo’s contents. The PowerShell command then downloaded an additional batch file from an attacker-controlled domain, and the malware established persistence by creating a scheduled task designed to resemble a legitimate Microsoft Edge update process.

The final payload was a Rust-based infostealer. According to HiddenLayer, it targeted Chromium- and Firefox-derived browsers, Discord local storage, cryptocurrency wallets, FileZilla configurations, and host system information. The malware also tried to disable the Windows Antimalware Scan Interface and Event Tracing.

Wider campaigns

HiddenLayer also said it found six further Hugging Face repositories containing virtually identical loader logic and sharing infrastructure with this attack. The case follows other warnings about malicious AI models on Hugging Face, including poisoned AI SDKs and fake OpenClaw installers. The common thread is that attackers are treating AI development workflows as a route into normally secure environments. AI repositories often contain executable code, setup instructions, dependency files, notebooks, and scripts, and it’s these peripheral elements, rather than the models themselves, that cause the problems.

Sakshi Grover, senior research manager for cybersecurity services at IDC, said traditional software composition analysis (SCA) was designed to inspect dependency manifests, libraries, and container images, and is less effective at identifying malicious loader logic in AI repositories. Grover also cited IDC’s November 2025 FutureScape report, which calls for 60% of agentic AI systems to have a bill of materials by 2027.
This would help companies track which AI artefacts they use, their source, which versions were approved, and whether they contain executable components.

Response and mitigation

HiddenLayer advised anyone who cloned Open-OSS/privacy-filter and ran start.bat, python loader.py, or any file from the repository on a Windows host to treat the system as compromised, and recommended re-imaging affected machines. Browser sessions should be considered compromised even if passwords are not stored locally, as session cookies can let attackers bypass MFA in some circumstances. Hugging Face has confirmed the repo has been removed.
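The infection-chain indicators HiddenLayer described (SSL verification disabled, base64-decoded URLs, commands handed to PowerShell) lend themselves to a quick static check before any cloned setup script is run. The sketch below is our own illustration, not HiddenLayer's tooling; the patterns and names are assumptions, and a match is a reason to inspect a file by hand, not proof of malice:

```python
import re

# Illustrative textual indicators based on the loader behaviour described
# above. These approximations are ours, not a vetted detection ruleset.
INDICATORS = {
    "ssl_disabled": re.compile(r"verify\s*=\s*False|_create_unverified_context"),
    "base64_decode": re.compile(r"\bb64decode\s*\("),
    "powershell": re.compile(r"powershell(\.exe)?", re.IGNORECASE),
    "shell_exec": re.compile(r"subprocess\.(run|Popen|call|check_output)|os\.system"),
}

def scan_script(source: str) -> list[str]:
    """Return the names of suspicious indicators found in a script's source."""
    return [name for name, pat in INDICATORS.items() if pat.search(source)]
```

A benign model loader should trigger none of these; a script that decodes a URL and pipes commands to PowerShell, as in this campaign, trips several at once.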

Key takeaways

  • Compromised repositories on open model platforms can undermine security in corporate environments.
  • The artificial inflation of downloads and likes indicates that attackers are becoming more sophisticated.
  • Education on security practices is essential for AI developers and users.

Editorial analysis

The discovery of a malicious repository on Hugging Face masquerading as an OpenAI release raises serious concerns about security in the AI ecosystem, especially at a time when Brazil is increasingly investing in technology and innovation. The incident highlights the vulnerability of open model platforms, where developers and data scientists may inadvertently incorporate malicious code into their projects. This situation is particularly critical in corporate environments, where access to credentials and internal systems can be compromised, resulting in severe consequences for information security.

Moreover, the fact that the malicious repository managed to accumulate 244,000 downloads before being removed suggests that security verification mechanisms on open-source platforms need to be improved. The use of techniques such as artificial inflation of downloads and likes indicates that attackers are becoming more sophisticated in their approaches, which requires a proactive response from the tech community. In Brazil, where AI adoption is on the rise, awareness of these risks must be a priority for companies and developers.

The incident also underscores the importance of educating users about best security practices when using AI models, including verifying the authenticity of repositories and critically analyzing installation scripts. As the use of AI becomes more prevalent, the need for a safe and reliable environment for the development and implementation of AI models becomes increasingly urgent. What we observe is a growing need for collaboration between the developer community, researchers, and hosting platforms to mitigate these risks.

Finally, it is crucial that Brazilian companies adopting AI in their operations are aware of these threats and implement robust security measures. This includes conducting regular security audits on models and repositories used, as well as implementing security policies that protect sensitive data from unauthorized access. The future of AI in Brazil depends not only on innovation but also on the security and trust in the technologies we are developing and using.
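The artefact tracking that IDC's bill-of-materials proposal points toward, recording each model's source, approved version, and executable components, could take a shape like the record below. IDC prescribes no schema, so every field name here is a hypothetical illustration:

```python
from dataclasses import dataclass, field, asdict

# A hypothetical, minimal bill-of-materials record for one AI artefact.
# All field names are illustrative assumptions, not a published standard.
@dataclass
class AIBomEntry:
    name: str                      # artefact identifier, e.g. a registry repo path
    source: str                    # registry or URL it was fetched from
    version: str                   # the exact revision that was reviewed
    approved: bool                 # whether it passed internal security review
    executable_files: list = field(default_factory=list)  # scripts shipped with the weights

entry = AIBomEntry(
    name="example-org/example-model",  # placeholder artefact name
    source="huggingface.co",
    version="rev-0000000",             # placeholder revision
    approved=False,                    # contains unreviewed executable files
    executable_files=["loader.py", "start.bat"],
)
record = asdict(entry)
```

A registry of such records would let a security team answer, after an incident like this one, exactly which environments pulled a given artefact and whether it shipped executable code.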

What this coverage includes

  • Clear source attribution and link to the original publication.
  • Editorial framing about relevance, impact, and likely next developments.
  • Review for readability, context, and duplication before publication.

Original source:

AI News

About this article

This article was curated and published by AIDaily as part of our editorial coverage of artificial intelligence developments. The content is based on the original source cited below, enriched with editorial context and analysis. Automated tools may assist with translation and initial structuring, but publication decisions, factual review, and contextual framing remain editorial responsibilities.

Learn more about our editorial process