Cybersecurity researchers have discovered two malicious machine learning (ML) models on Hugging Face that used an unconventional technique involving corrupted pickle files to evade detection.
"The pickle files extracted from the PyTorch archives contained malicious Python code at the beginning of the file," said Karlo Zanki, a researcher at ReversingLabs, in a report shared with The Hacker News.
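The mechanics can be sketched in a few lines of Python. Pickle is not a passive data format: loading a pickle executes a stream of opcodes, and an object's __reduce__ hook can make the loader import and call arbitrary functions. Because opcodes run sequentially, a payload placed at the start of the stream executes even if the rest of the file is deliberately corrupted. The snippet below is a minimal, self-contained illustration of that general behaviour under stated assumptions; it is not the actual Hugging Face samples or ReversingLabs' tooling, and the Payload class, the harmless print() stand-in, and the SUSPICIOUS opcode list are purely illustrative.

```python
import io
import pickle
import pickletools


class Payload:
    """Illustrative stand-in for a malicious object embedded in a model file."""

    def __reduce__(self):
        # On unpickling, the loader calls print("payload executed").
        # A real payload would invoke something far more dangerous.
        return (print, ("payload executed",))


# Serialize the payload, then corrupt the tail of the stream by replacing the
# final STOP opcode with a junk byte, mimicking a deliberately broken pickle.
blob = pickle.dumps(Payload())[:-1] + b"\xff"

try:
    pickle.loads(blob)  # print() runs first, then the junk byte raises an error
except Exception as exc:
    print("unpickling failed only AFTER the payload ran:", exc)

# Defensive scan: iterate over opcodes without executing anything and flag the
# ones that can trigger code execution, tolerating the malformed tail.
SUSPICIOUS = {"GLOBAL", "STACK_GLOBAL", "REDUCE", "INST", "OBJ", "NEWOBJ", "NEWOBJ_EX"}

try:
    for opcode, arg, pos in pickletools.genops(io.BytesIO(blob)):
        if opcode.name in SUSPICIOUS:
            print(f"suspicious opcode {opcode.name} at offset {pos}: {arg!r}")
except Exception as exc:
    print("opcode scan stopped at the corrupted region:", exc)
```

The second half of the sketch shows one defensive angle: streaming over the opcodes with pickletools.genops flags instructions capable of code execution without ever deserializing the data, and it keeps reporting findings right up to the point where the corrupted stream breaks.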
