Hi @muhammadrizwanmunawar ,
We don’t have a sample for this at the moment, but it should not be difficult to implement yourself.
Here is the rough idea:
- You pass a TensorRT engine file that has been encrypted (with a watermark or any other encryption scheme) to nvinfer via the “model-engine-file” property.
- In the nvinfer code - /opt/nvidia/deepstream/deepstream-6.0/sources/libs/nvdsinfer/nvdsinfer_model_builder.cpp:
/* Deserialize engine from file */
std::unique_ptr<TrtEngine>
TrtModelBuilder::deserializeEngine(const std::string& path, int dla)
{
    std::ifstream fileIn(path, std::ios::binary);
    if (!fileIn.is_open())
    {
        dsInferError(
            "Deserialize engine failed because file path: %s open error",
            safeStr(path));
        return nullptr;
    }
    .... **// ---> after the file is opened, add your own code here to remove the watermark or otherwise decrypt the TensorRT engine buffer**
    UniquePtrWDestroy<nvinfer1::ICudaEngine> engine =
        runtime->deserializeCudaEngine(data.data(), size, factory); **// ---> TRT API to deserialize the decrypted TRT engine**
    ...
}