Originally published at: https://developer.nvidia.com/blog/defending-ai-model-files-from-unauthorized-access-with-canaries/
As AI models grow in capability, cost more to create, and encode more sensitive or proprietary data, securing them at rest becomes increasingly important. Organizations are designing policies and tools, often as part of data loss prevention and secure supply chain programs, to protect model weights. While security engineering discussions tend to focus on prevention (how do we keep attackers away from the files in the first place?), detection matters too: a canary embedded in a model file can alert defenders the moment the file is accessed or loaded without authorization.
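To make the idea concrete, here is a minimal sketch of a pickle-based canary. The class name `CanaryPayload`, the file name, and the URL are hypothetical placeholders rather than the post's exact implementation; in practice you would point the callback at a unique token endpoint (for example, one issued by a canary token service) so that simply unpickling the file raises an alert.

```python
import pickle
import urllib.request

# Hypothetical canary endpoint; use a unique token URL per planted file so
# alerts identify exactly which decoy was opened.
CANARY_URL = "https://canary.example.com/t/abc123"

class CanaryPayload:
    """Object whose unpickling phones home to a canary endpoint."""

    def __reduce__(self):
        # __reduce__ tells pickle how to rebuild this object: return a
        # (callable, args) pair, and pickle calls it at load time. Here
        # that means loading the file fires an HTTP request to the canary.
        return (urllib.request.urlopen, (CANARY_URL,))

if __name__ == "__main__":
    # Plant the tripwire: anyone who pickle.load()s this decoy file
    # triggers the callback, which monitoring can turn into an alert.
    with open("decoy_weights.pkl", "wb") as f:
        pickle.dump(CanaryPayload(), f)
```

Legitimate pipelines never load the decoy, so any callback from `decoy_weights.pkl` is a strong signal that someone is poking at your model files. Note that the same `__reduce__` mechanism is what makes loading untrusted pickles dangerous in the first place; here it is turned to the defender's advantage.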
We had fun thinking about defensive uses of pickle code injection. Throw some other ideas out there!