Question re: DeepStream 4.0 caffemodel compiled engine files (running on Jetson Nano)

I’m running the deepstream-test2-app quite successfully, but because the config points at a model file, TensorRT of course has to compile it into a platform-specific engine file at startup, and that takes quite a long time.

The docs lead me to believe that if I specify “model-engine-file” instead of “model-file” in the config, since I already have the compiled engine, I should be able to skip the compilation step and just load the model engine directly.

However, no matter what permutations I try, I cannot get it to work.

  1. If I just specify “model-engine-file” in the config, runtime complains it can’t find a model, so it bombs out.
  2. If I include BOTH “model-file” and “model-engine-file”, the latter is ignored, and it re-compiles the engine file anyway.
  3. I’ve also tried setting the GStreamer “nvinfer” plugin’s “model-engine-file” property, and it, too, is completely ignored.
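
For concreteness, the relevant stanza for attempt 2 looks roughly like this (the paths are placeholders, not my actual files; the engine filename follows DeepStream’s usual batch-size/precision naming):

```ini
[property]
# Both keys present: the engine file is ignored and recompiled anyway
model-file=../../models/Primary_Detector/resnet10.caffemodel
model-engine-file=../../models/Primary_Detector/resnet10.caffemodel_b1_fp16.engine
```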

Is there a sample that shows a working DeepStream + GStreamer workflow where the precompiled .engine file is used instead of the .caffemodel file? Or am I just missing something obvious?



deepstream-test2 contains no code that configures ‘model-engine-file’. You may refer to the source of deepstream-app and add the equivalent code to deepstream-test2.

For reference, the relevant code in deepstream-app is:

if (config->model_engine_file_path) {
  g_object_set (G_OBJECT (bin->primary_gie), "model-engine-file",
      GET_FILE_PATH (config->model_engine_file_path), NULL);
}
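
As an alternative that avoids modifying the application code, the nvinfer configuration file that deepstream-test2 already loads (dstest2_pgie_config.txt) also accepts the model-engine-file key directly. A minimal sketch, assuming the engine has already been built and using a placeholder absolute path:

```ini
[property]
# Prebuilt engine; if loading fails, nvinfer falls back to rebuilding
# from model-file, so keep both keys and use an absolute engine path
model-file=/opt/models/Primary_Detector/resnet10.caffemodel
model-engine-file=/opt/models/Primary_Detector/resnet10.caffemodel_b1_fp16.engine
```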



Please confirm that the path is correct. It is safer to use an absolute path from the filesystem root.

If you still hit the issue, please share error log for reference.

The error is resolved. It required sudo apt update, a reinstall, and a reboot… and yes, an absolute path from root helped as well, thank you. If anyone has the same issue, post to this thread and I’ll do what I can to walk you through it. Cheers