Explore tweets tagged as #nullifAI
@rst_cloud
RST Cloud
1 year
#threatreport #MediumCompleteness Malicious ML models discovered on Hugging Face platform | 06-02-2025 Source: https://t.co/MMiLeUyWQz Key details below ↓ 💀Threats: Nullifai_technique, Supply_chain_technique 🎯Victims: Hugging Face 🏭Industry: Software_development 🌐Geo:
0
0
0
@ReversingLabs
ReversingLabs
1 year
🤖 Learn about nullifAI & other AI-related threats to software supply chains in our next webinar on 2/20 at 11am ET: https://t.co/Ev1NADqPOH
2
0
3
@rst_cloud
RST Cloud
8 months
#threatreport #LowCompleteness Malicious attack method on hosted ML models now targets PyPI | 23-05-2025 Source: https://t.co/cbvkwF0EtA Key details below ↓ 💀Threats: Supply_chain_technique, Nullifai_technique 🎯Victims: Developers, users of Alibaba AI Labs services
0
0
0
@virusbtn
Virus Bulletin
1 year
Researchers from ReversingLabs recently discovered two Hugging Face models containing malicious code. The nullifAI attack involves abusing Pickle file serialization. https://t.co/LlfGYQ6zFh
0
11
56
@fr0gger_
Thomas Roccia 🤘
1 year
Interesting report from ReversingLabs researchers, who named a new attack nullifAI, a novel malware distribution technique targeting ML models on Hugging Face. 😈Attackers exploited Pickle serialization to deliver payloads undetected. https://t.co/wwGP91Bpcg
3
39
103
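The mechanism the posts above refer to is generic to Python's pickle format rather than specific to the discovered models: during deserialization the unpickler invokes whatever callable a __reduce__ hook names, so loading an untrusted pickle amounts to code execution. A minimal, benign sketch (the Payload class and the print call are illustrative stand-ins, not the actual nullifAI samples):

```python
import pickle

class Payload:
    # __reduce__ tells the unpickler which callable to invoke while loading,
    # so arbitrary code runs as a side effect of pickle.loads().
    def __reduce__(self):
        return (print, ("code executed during unpickling",))

blob = pickle.dumps(Payload())
pickle.loads(blob)  # prints the message; a real payload could run anything
```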
@nullifaii
nullifai
21 days
@Jaxweah @grok @grok how likely am I to be picked?
0
0
0
@RoryCrave
Rory J Bernier
1 year
Malicious AI models on Hugging Face exploit a novel attack technique called nullifAI. Learn how they bypass protective measures: https://t.co/y810RQ9QuH
0
0
0
@nullif_ai
nullifai
5 months
Repost of @siyan_zhao (Siyan Zhao, 5 months):
Thanks AK for sharing our work! Unlike autoregressive LLMs, diffusion LLMs can be conditioned on future reasoning hints during generation through inpainting 🧩, enabling guided exploration toward correct solutions. We show that applying inpainting-guided exploration in RL
0
0
0
@nullif_ai
nullifai
5 months
@Sauers_ Gemini praising Codex
0
0
0
@ReversingLabs
ReversingLabs
1 year
⚠️ #ML devs, take note: RL threat researchers have identified nullifAI, a novel attack technique used on ML models hosted on #HuggingFace.
1
4
4
@syedaquib77
Syed Aquib
1 year
Malicious ML Models on Hugging Face Exploit Broken Pickle Format
Cybersecurity researchers have found two malicious machine learning models on Hugging Face. These models use "broken" pickle files to evade detection, a technique called "nullifAI". The payload is a platform-aware
0
0
0
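The "broken" detail in the post above is what reportedly let the files slip past scanning: the pickle virtual machine executes opcodes sequentially, so a payload placed near the start of the stream runs before the loader ever reaches the corrupted tail, while tooling that expects a fully valid pickle may reject or skip the file. A rough illustration of that ordering, assuming a benign print() payload and protocol 2 for a simple byte layout:

```python
import pickle

class Payload:
    def __reduce__(self):
        # Invoked by the pickle VM as soon as its REDUCE opcode is reached.
        return (print, ("payload ran before the stream broke",))

blob = pickle.dumps(Payload(), protocol=2)
broken = blob[:-1]  # drop the trailing STOP opcode, leaving an invalid pickle

try:
    pickle.loads(broken)  # the payload still executes...
except Exception as exc:
    print("...and only afterwards does loading fail:", repr(exc))
```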
@threatlight
threatlight
1 year
New nullifAI technique bypasses Hugging Face protective measures, raising cybersecurity concerns. Learn more:
0
0
0
@foxbook
キタきつね
1 year
Open Source AI Models: Perfect Storm for Malicious Code, Vulnerabilities #DarkReading (Feb 15) #OpenSource #AIModels #SecurityRisk #HuggingFace #NullifAI https://t.co/qBlGeDvbhk
0
0
2
@DCWebGuy
DCWebGuy
1 year
Malicious ML models discovered on Hugging Face platform
Software development teams working on machine learning take note: RL threat researchers have identified nullifAI, a novel attack technique used on Hugging Face. https://t.co/XrxESmqlhP
0
0
1
@_Ta_tsu_
TaTsu🙋‍♂️
1 year
Hugging Face | New AI model threat "nullifAI" discovered: a detection-evasion technique abusing broken Pickle files https://t.co/FQB6Ibc6Wf
0
0
0
@Eth1calHackrZ
Furkan D.
1 year
1/7 🚨 Malicious ML models on Hugging Face exploit "broken" pickles to evade detection! 📛 nullifAI uses reverse shells to connect to hardcoded IPs, threatening supply chain security. 🔒 #Cybersecurity #MLSecurity #HuggingFace 🚀
0
0
0
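On the defensive side, the countermeasure these reports describe being bypassed is static inspection of the pickle opcode stream instead of loading it. A hypothetical scanner sketch using the standard library's pickletools (the SUSPICIOUS_OPCODES set and scan_pickle name are mine, not from the reports; production tools such as picklescan additionally compare the imported names against allow/deny lists):

```python
import pickletools

# Opcodes that import or call objects; their presence in a model file warrants review.
SUSPICIOUS_OPCODES = {"GLOBAL", "STACK_GLOBAL", "REDUCE", "INST", "OBJ", "NEWOBJ"}

def scan_pickle(raw: bytes) -> list:
    """Statically list import/call opcodes in a pickle stream.

    pickletools.genops parses opcodes without executing them, so it is safe
    on untrusted data, and it reports whatever precedes a corrupted tail
    before raising.
    """
    findings = []
    try:
        for opcode, arg, pos in pickletools.genops(raw):
            if opcode.name in SUSPICIOUS_OPCODES:
                findings.append(f"{opcode.name} at byte {pos}: {arg!r}")
    except Exception:
        findings.append("stream is truncated or corrupted (itself a red flag)")
    return findings
```

Run against the benign demo pickle from the earlier sketches, this would flag a STACK_GLOBAL (or GLOBAL, on older protocols) followed by a REDUCE; because it never executes the stream, it also surfaces the opcodes that come before the corruption in a deliberately broken file, which is where the reports above place the nullifAI payload.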
@innovaTopia_JP
innovaTopia
1 year
Hugging Face | New AI model threat "nullifAI" discovered: a detection-evasion technique abusing broken Pickle files https://t.co/8Efs2WhDlw
0
1
1
@A_I_News
AI & Machine Learning News [Official]
1 year
Hugging Face | New AI model threat "nullifAI" discovered: a detection-evasion technique abusing broken Pickle files - innovaTopia
1
0
2
@nullif_ai
nullifai
6 months
GLM 4.5 Air is surprisingly good!
0
0
0