![](https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgQWcYcAOTlIkcZh42hOo7tpHqw-bfgH-fI8AVtVlBvdAxG9T6sk7yiPcqLiF12fI1NFDSFs_E1DnXgFATAyWWrdnOapLOkunaH-MV9REbP5vKDzL5Vknra2-icIkjfR-jB-FdOieydkQK-dUrCpXcT7y6vT0n-MFfQzVLtEdhPvdJwkdT0RZ1wEJFG0gmo/s1600/ai.png)
New research has found that artificial intelligence (AI)-as-a-service providers such as Hugging Face are susceptible to two critical risks that could allow threat actors to escalate privileges, gain cross-tenant access to other customers' models, and even take over continuous integration and continuous deployment (CI/CD) pipelines.
“Malicious models represent a major risk to AI systems,
Source: The Hacker News