I’m running the deepstream-test2-app quite successfully, but because it is pointed at a model file, TensorRT of course has to compile it into a platform-specific engine file, which takes quite a long time.
The docs lead me to believe that if I specify “model-engine-file” in the config instead of “model-file”, since I already have the compiled engine, I should be able to skip the compilation step and just load the engine directly.
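For reference, here is roughly the [property] group I’m using when I swap the keys (paths are placeholders; everything else is left as it ships with the sample PGIE config):

```ini
[property]
# ...other keys left as they ship with the sample PGIE config...
# Original key, commented out:
#model-file=/path/to/resnet10.caffemodel
# Precompiled engine (placeholder path):
model-engine-file=/path/to/resnet10.caffemodel_b1_fp16.engine
```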
However, no matter what permutations I try, I cannot get it to work.
- If I just specify “model-engine-file” in the config, runtime complains it can’t find a model, so it bombs out.
- If I include BOTH “model-file” and “model-engine-file”, the latter is ignored, and it re-compiles the engine file anyway.
- I’ve also tried setting the GStreamer “nvinfer” plugin’s “model-engine-file” property directly, and it, too, is completely ignored (see the snippet after this list).
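In case it matters, this is roughly what that third attempt looks like in the app code (element and property names as in the test app; the engine path is a placeholder):

```c
#include <gst/gst.h>

/* Rough sketch of my third attempt: setting model-engine-file directly
 * on the nvinfer element in addition to the config file. */
static GstElement *
make_pgie (void)
{
  GstElement *pgie =
      gst_element_factory_make ("nvinfer", "primary-nvinference-engine");

  g_object_set (G_OBJECT (pgie),
      "config-file-path", "dstest2_pgie_config.txt",
      "model-engine-file", "/path/to/resnet10.caffemodel_b1_fp16.engine",
      NULL);

  return pgie;  /* the model-engine-file property appears to be ignored */
}
```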
Is there a sample that shows a working workflow using DeepStream with GStreamer where the precompiled .engine file is used instead of the .caffemodel file? Or am I just missing something obvious?
TIA
-WG