Can't run the deepstream-app 5.1 example in the DeepStream 5.1 docker, but it works in the DeepStream 5.0.1 docker

Hello everyone.
I’m having a hard time figuring this out.

I have two deepstream containers.
nvcr.io/nvidia/deepstream:5.0.1-20.09-devel
nvcr.io/nvidia/deepstream:5.1-21.02-devel

and I run both with this command (substituting the respective tag for 5.x-x):

docker run --gpus all -it --rm -v /home/user:/home/user -p 8554:8554 -p 8555:8555 -p 5400:5400 -p 5401:5401 -p 554:554 -w /root nvcr.io/nvidia/deepstream:5.x-x-devel
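As a first sanity check inside each container, it can help to confirm GPU visibility and record which TensorRT each image ships, since that is what differs between the two tags (a sketch; `libnvinfer7` is the TensorRT 7 runtime package on these images, and the fallback echoes are just so the commands degrade gracefully outside the container):

```shell
# Confirm the container sees the GPU and note which TensorRT it ships,
# since engine files are tied to the TensorRT version that built them.
nvidia-smi --query-gpu=name --format=csv,noheader 2>/dev/null \
  || echo "no NVIDIA driver visible"
dpkg-query -W -f='${Package} ${Version}\n' libnvinfer7 2>/dev/null \
  || echo "libnvinfer7 not found"
```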

When I’m in the 5.1 container I can’t run the deepstream-app example (with minor modifications: just the source and a symbolic link). The output I get is:

(gst-plugin-scanner:13): GStreamer-WARNING **: 20:02:02.455: Failed to load plugin '/usr/lib/x86_64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_inferserver.so': libtritonserver.so: cannot open shared object file: No such file or directory

 *** DeepStream: Launched RTSP Streaming at rtsp://localhost:8554/ds-test ***

ERROR: ../nvdsinfer/nvdsinfer_func_utils.cpp:33 [TRT]: coreReadArchive.cpp (41) - Serialization Error in verifyHeader: 0 (Version tag does not match. Note: Current Version: 96, Serialized Engine Version: 89)
ERROR: ../nvdsinfer/nvdsinfer_func_utils.cpp:33 [TRT]: INVALID_STATE: std::exception
ERROR: ../nvdsinfer/nvdsinfer_func_utils.cpp:33 [TRT]: INVALID_CONFIG: Deserialize the cuda engine failed.
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:1567 Deserialize engine failed from file: /home/user/dev/nvidia/deepstream-5.1/samples/configs/deepstream-app/../../models/Secondary_CarMake/resnet18.caffemodel_b16_gpu0_int8.engine
0:00:18.394225569    12 0x5585e237aa90 WARN                 nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<secondary_gie_2> NvDsInferContext[UID 6]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1691> [UID = 6]: deserialize engine from file :/home/user/dev/nvidia/deepstream-5.1/samples/configs/deepstream-app/../../models/Secondary_CarMake/resnet18.caffemodel_b16_gpu0_int8.engine failed
0:00:18.394271027    12 0x5585e237aa90 WARN                 nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<secondary_gie_2> NvDsInferContext[UID 6]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1798> [UID = 6]: deserialize backend context from engine from file :/home/user/dev/nvidia/deepstream-5.1/samples/configs/deepstream-app/../../models/Secondary_CarMake/resnet18.caffemodel_b16_gpu0_int8.engine failed, try rebuild
0:00:18.394286793    12 0x5585e237aa90 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<secondary_gie_2> NvDsInferContext[UID 6]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1716> [UID = 6]: Trying to create engine from model files
Warning: Flatten layer ignored. TensorRT implicitly flattens input to FullyConnected layers, but in other circumstances this will result in undefined behavior.
INFO: ../nvdsinfer/nvdsinfer_func_utils.cpp:39 [TRT]: Reading Calibration Cache for calibrator: EntropyCalibration2
INFO: ../nvdsinfer/nvdsinfer_func_utils.cpp:39 [TRT]: Generated calibration scales using calibration cache. Make sure that calibration cache has latest scales.
INFO: ../nvdsinfer/nvdsinfer_func_utils.cpp:39 [TRT]: To regenerate calibration cache, please delete the existing one. TensorRT will generate a new calibration cache.
Illegal instruction (core dumped)

but if I copy the whole deepstream-5.1 folder to /home/user and then run the 5.0.1 container, I can run deepstream-app with the same config file.
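The first TRT error above ("Version tag does not match", current 96 vs serialized 89) suggests the cached .engine files were serialized by a different TensorRT than the one trying to load them; TensorRT engines are not portable across TensorRT versions. A minimal sketch of clearing the cached engines so each container rebuilds its own (the demo path is made up; in practice MODELS_DIR would point at the samples/models directory):

```shell
# TensorRT .engine files are tied to the TensorRT version that wrote them,
# so engines built in one container may not deserialize in the other.
# Deleting cached engines forces deepstream-app to rebuild them.
MODELS_DIR="${MODELS_DIR:-/tmp/ds-models-demo}"
# demo setup so the cleanup command can be shown end-to-end:
mkdir -p "$MODELS_DIR/Secondary_CarMake"
touch "$MODELS_DIR/Secondary_CarMake/resnet18.caffemodel_b16_gpu0_int8.engine"
# the actual cleanup:
find "$MODELS_DIR" -name '*.engine' -print -delete
```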

Considering this is an official Docker container, I would expect to be able to run the example with no difficulties.

I am running the container on Ubuntu 18.04 LTS with an NVIDIA GTX 2600 GPU.

Thank you in advance!


More Info (From the comments)

It has come to my attention that when running the 5.0.1 container, if the engine file is not found, deepstream-app will generate it from the caffemodel; the 5.1 container fails to do the same.

I removed all the .engine files and ran deepstream-app on 5.1 to check whether the first error line in the output I posted was due to having generated the .engine files with the 5.0.1 container.
This is the output I got:

 *** DeepStream: Launched RTSP Streaming at rtsp://localhost:8554/ds-test ***

ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:1523 Deserialize engine failed because file path: /home/telconet/dev/nvidia/deepstream-5.1/samples/configs/deepstream-app/../../models/Secondary_CarMake/resnet18.caffemodel_b16_gpu0_int8.engine open error
0:00:02.866306939    60 0x7fcfac002290 WARN                 nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<secondary_gie_2> NvDsInferContext[UID 6]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1691> [UID = 6]: deserialize engine from file :/home/telconet/dev/nvidia/deepstream-5.1/samples/configs/deepstream-app/../../models/Secondary_CarMake/resnet18.caffemodel_b16_gpu0_int8.engine failed
0:00:02.866358923    60 0x7fcfac002290 WARN                 nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<secondary_gie_2> NvDsInferContext[UID 6]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1798> [UID = 6]: deserialize backend context from engine from file :/home/telconet/dev/nvidia/deepstream-5.1/samples/configs/deepstream-app/../../models/Secondary_CarMake/resnet18.caffemodel_b16_gpu0_int8.engine failed, try rebuild
0:00:02.866376185    60 0x7fcfac002290 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<secondary_gie_2> NvDsInferContext[UID 6]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1716> [UID = 6]: Trying to create engine from model files
Warning: Flatten layer ignored. TensorRT implicitly flattens input to FullyConnected layers, but in other circumstances this will result in undefined behavior.
INFO: ../nvdsinfer/nvdsinfer_func_utils.cpp:39 [TRT]: Reading Calibration Cache for calibrator: EntropyCalibration2
INFO: ../nvdsinfer/nvdsinfer_func_utils.cpp:39 [TRT]: Generated calibration scales using calibration cache. Make sure that calibration cache has latest scales.
INFO: ../nvdsinfer/nvdsinfer_func_utils.cpp:39 [TRT]: To regenerate calibration cache, please delete the existing one. TensorRT will generate a new calibration cache.
Illegal instruction (core dumped)
root@9aa0969e8680:/home/telconet/dev/nvidia/deepstream-5.1/samples/configs/deepstream-app# exit
exit

If I switch to the 5.0.1 container, the .engine files are generated successfully and the app runs with no problem. The output is:

(gst-plugin-scanner:12): GStreamer-WARNING **: 13:06:56.125: Failed to load plugin '/usr/lib/x86_64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_inferserver.so': libtrtserver.so: cannot open shared object file: No such file or directory

 *** DeepStream: Launched RTSP Streaming at rtsp://localhost:8554/ds-test ***

ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:1523 Deserialize engine failed because file path: /home/telconet/dev/nvidia/deepstream-5.1/samples/configs/deepstream-app/../../models/Secondary_CarMake/resnet18.caffemodel_b16_gpu0_int8.engine open error
0:00:02.335304599    11 0x56370fd0e460 WARN                 nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<secondary_gie_2> NvDsInferContext[UID 6]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1690> [UID = 6]: deserialize engine from file :/home/telconet/dev/nvidia/deepstream-5.1/samples/configs/deepstream-app/../../models/Secondary_CarMake/resnet18.caffemodel_b16_gpu0_int8.engine failed
0:00:02.335348307    11 0x56370fd0e460 WARN                 nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<secondary_gie_2> NvDsInferContext[UID 6]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1797> [UID = 6]: deserialize backend context from engine from file :/home/telconet/dev/nvidia/deepstream-5.1/samples/configs/deepstream-app/../../models/Secondary_CarMake/resnet18.caffemodel_b16_gpu0_int8.engine failed, try rebuild
0:00:02.335366273    11 0x56370fd0e460 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<secondary_gie_2> NvDsInferContext[UID 6]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1715> [UID = 6]: Trying to create engine from model files
Warning: Flatten layer ignored. TensorRT implicitly flattens input to FullyConnected layers, but in other circumstances this will result in undefined behavior.
INFO: ../nvdsinfer/nvdsinfer_func_utils.cpp:39 [TRT]: Reading Calibration Cache for calibrator: EntropyCalibration2
INFO: ../nvdsinfer/nvdsinfer_func_utils.cpp:39 [TRT]: Generated calibration scales using calibration cache. Make sure that calibration cache has latest scales.
INFO: ../nvdsinfer/nvdsinfer_func_utils.cpp:39 [TRT]: To regenerate calibration cache, please delete the existing one. TensorRT will generate a new calibration cache.
INFO: ../nvdsinfer/nvdsinfer_func_utils.cpp:39 [TRT]: Some tactics do not have sufficient workspace memory to run. Increasing workspace size may increase performance, please check verbose output.
INFO: ../nvdsinfer/nvdsinfer_func_utils.cpp:39 [TRT]: Detected 1 inputs and 1 output network tensors.
0:00:40.091499255    11 0x56370fd0e460 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<secondary_gie_2> NvDsInferContext[UID 6]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1748> [UID = 6]: serialize cuda engine to file: /home/telconet/dev/nvidia/deepstream-5.1/samples/models/Secondary_CarMake/resnet18.caffemodel_b16_gpu0_int8.engine successfully
WARNING: ../nvdsinfer/nvdsinfer_func_utils.cpp:36 [TRT]: Current optimization profile is: 0. Please ensure there are no enqueued operations pending in this context prior to switching profiles
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:685 [Implicit Engine Info]: layers num: 2
0   INPUT  kFLOAT input_1         3x224x224
1   OUTPUT kFLOAT predictions/Softmax 20x1x1

0:00:40.100451406    11 0x56370fd0e460 INFO                 nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<secondary_gie_2> [UID 6]: Load new model:/home/telconet/dev/nvidia/deepstream-5.1/samples/configs/deepstream-app/config_infer_secondary_carmake.txt sucessfully
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:1523 Deserialize engine failed because file path: /home/telconet/dev/nvidia/deepstream-5.1/samples/configs/deepstream-app/../../models/Secondary_CarColor/resnet18.caffemodel_b16_gpu0_int8.engine open error
0:00:40.100767918    11 0x56370fd0e460 WARN                 nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<secondary_gie_1> NvDsInferContext[UID 5]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1690> [UID = 5]: deserialize engine from file :/home/telconet/dev/nvidia/deepstream-5.1/samples/configs/deepstream-app/../../models/Secondary_CarColor/resnet18.caffemodel_b16_gpu0_int8.engine failed
0:00:40.100790582    11 0x56370fd0e460 WARN                 nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<secondary_gie_1> NvDsInferContext[UID 5]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1797> [UID = 5]: deserialize backend context from engine from file :/home/telconet/dev/nvidia/deepstream-5.1/samples/configs/deepstream-app/../../models/Secondary_CarColor/resnet18.caffemodel_b16_gpu0_int8.engine failed, try rebuild
0:00:40.100815071    11 0x56370fd0e460 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<secondary_gie_1> NvDsInferContext[UID 5]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1715> [UID = 5]: Trying to create engine from model files
Warning: Flatten layer ignored. TensorRT implicitly flattens input to FullyConnected layers, but in other circumstances this will result in undefined behavior.
INFO: ../nvdsinfer/nvdsinfer_func_utils.cpp:39 [TRT]: Reading Calibration Cache for calibrator: EntropyCalibration2
INFO: ../nvdsinfer/nvdsinfer_func_utils.cpp:39 [TRT]: Generated calibration scales using calibration cache. Make sure that calibration cache has latest scales.
INFO: ../nvdsinfer/nvdsinfer_func_utils.cpp:39 [TRT]: To regenerate calibration cache, please delete the existing one. TensorRT will generate a new calibration cache.
INFO: ../nvdsinfer/nvdsinfer_func_utils.cpp:39 [TRT]: Some tactics do not have sufficient workspace memory to run. Increasing workspace size may increase performance, please check verbose output.
INFO: ../nvdsinfer/nvdsinfer_func_utils.cpp:39 [TRT]: Detected 1 inputs and 1 output network tensors.
0:01:14.775056329    11 0x56370fd0e460 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<secondary_gie_1> NvDsInferContext[UID 5]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1748> [UID = 5]: serialize cuda engine to file: /home/telconet/dev/nvidia/deepstream-5.1/samples/models/Secondary_CarColor/resnet18.caffemodel_b16_gpu0_int8.engine successfully
WARNING: ../nvdsinfer/nvdsinfer_func_utils.cpp:36 [TRT]: Current optimization profile is: 0. Please ensure there are no enqueued operations pending in this context prior to switching profiles
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:685 [Implicit Engine Info]: layers num: 2
0   INPUT  kFLOAT input_1         3x224x224
1   OUTPUT kFLOAT predictions/Softmax 12x1x1

0:01:14.781096399    11 0x56370fd0e460 INFO                 nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<secondary_gie_1> [UID 5]: Load new model:/home/telconet/dev/nvidia/deepstream-5.1/samples/configs/deepstream-app/config_infer_secondary_carcolor.txt sucessfully
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:1523 Deserialize engine failed because file path: /home/telconet/dev/nvidia/deepstream-5.1/samples/configs/deepstream-app/../../models/Secondary_VehicleTypes/resnet18.caffemodel_b16_gpu0_int8.engine open error
0:01:14.781434127    11 0x56370fd0e460 WARN                 nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<secondary_gie_0> NvDsInferContext[UID 4]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1690> [UID = 4]: deserialize engine from file :/home/telconet/dev/nvidia/deepstream-5.1/samples/configs/deepstream-app/../../models/Secondary_VehicleTypes/resnet18.caffemodel_b16_gpu0_int8.engine failed
0:01:14.781453202    11 0x56370fd0e460 WARN                 nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<secondary_gie_0> NvDsInferContext[UID 4]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1797> [UID = 4]: deserialize backend context from engine from file :/home/telconet/dev/nvidia/deepstream-5.1/samples/configs/deepstream-app/../../models/Secondary_VehicleTypes/resnet18.caffemodel_b16_gpu0_int8.engine failed, try rebuild
0:01:14.781473733    11 0x56370fd0e460 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<secondary_gie_0> NvDsInferContext[UID 4]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1715> [UID = 4]: Trying to create engine from model files
Warning: Flatten layer ignored. TensorRT implicitly flattens input to FullyConnected layers, but in other circumstances this will result in undefined behavior.
INFO: ../nvdsinfer/nvdsinfer_func_utils.cpp:39 [TRT]: Reading Calibration Cache for calibrator: EntropyCalibration2
INFO: ../nvdsinfer/nvdsinfer_func_utils.cpp:39 [TRT]: Generated calibration scales using calibration cache. Make sure that calibration cache has latest scales.
INFO: ../nvdsinfer/nvdsinfer_func_utils.cpp:39 [TRT]: To regenerate calibration cache, please delete the existing one. TensorRT will generate a new calibration cache.
INFO: ../nvdsinfer/nvdsinfer_func_utils.cpp:39 [TRT]: Some tactics do not have sufficient workspace memory to run. Increasing workspace size may increase performance, please check verbose output.
INFO: ../nvdsinfer/nvdsinfer_func_utils.cpp:39 [TRT]: Detected 1 inputs and 1 output network tensors.
0:01:48.664675669    11 0x56370fd0e460 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<secondary_gie_0> NvDsInferContext[UID 4]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1748> [UID = 4]: serialize cuda engine to file: /home/telconet/dev/nvidia/deepstream-5.1/samples/models/Secondary_VehicleTypes/resnet18.caffemodel_b16_gpu0_int8.engine successfully
WARNING: ../nvdsinfer/nvdsinfer_func_utils.cpp:36 [TRT]: Current optimization profile is: 0. Please ensure there are no enqueued operations pending in this context prior to switching profiles
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:685 [Implicit Engine Info]: layers num: 2
0   INPUT  kFLOAT input_1         3x224x224
1   OUTPUT kFLOAT predictions/Softmax 6x1x1

0:01:48.670706741    11 0x56370fd0e460 INFO                 nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<secondary_gie_0> [UID 4]: Load new model:/home/telconet/dev/nvidia/deepstream-5.1/samples/configs/deepstream-app/config_infer_secondary_vehicletypes.txt sucessfully
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream/lib/libnvds_mot_klt.so
gstnvtracker: Optional NvMOT_RemoveStreams not implemented
gstnvtracker: Batch processing is OFF
gstnvtracker: Past frame output is OFF
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:1523 Deserialize engine failed because file path: /home/telconet/dev/nvidia/deepstream-5.1/samples/configs/deepstream-app/../../models/Primary_Detector/resnet10.caffemodel_b4_gpu0_int8.engine open error
0:01:48.710727432    11 0x56370fd0e460 WARN                 nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1690> [UID = 1]: deserialize engine from file :/home/telconet/dev/nvidia/deepstream-5.1/samples/configs/deepstream-app/../../models/Primary_Detector/resnet10.caffemodel_b4_gpu0_int8.engine failed
0:01:48.710749153    11 0x56370fd0e460 WARN                 nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1797> [UID = 1]: deserialize backend context from engine from file :/home/telconet/dev/nvidia/deepstream-5.1/samples/configs/deepstream-app/../../models/Primary_Detector/resnet10.caffemodel_b4_gpu0_int8.engine failed, try rebuild
0:01:48.710762259    11 0x56370fd0e460 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1715> [UID = 1]: Trying to create engine from model files
INFO: ../nvdsinfer/nvdsinfer_func_utils.cpp:39 [TRT]: Reading Calibration Cache for calibrator: EntropyCalibration2
INFO: ../nvdsinfer/nvdsinfer_func_utils.cpp:39 [TRT]: Generated calibration scales using calibration cache. Make sure that calibration cache has latest scales.
INFO: ../nvdsinfer/nvdsinfer_func_utils.cpp:39 [TRT]: To regenerate calibration cache, please delete the existing one. TensorRT will generate a new calibration cache.
INFO: ../nvdsinfer/nvdsinfer_func_utils.cpp:39 [TRT]: Some tactics do not have sufficient workspace memory to run. Increasing workspace size may increase performance, please check verbose output.
INFO: ../nvdsinfer/nvdsinfer_func_utils.cpp:39 [TRT]: Detected 1 inputs and 2 output network tensors.
0:02:07.942582373    11 0x56370fd0e460 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1748> [UID = 1]: serialize cuda engine to file: /home/telconet/dev/nvidia/deepstream-5.1/samples/models/Primary_Detector/resnet10.caffemodel_b4_gpu0_int8.engine successfully
WARNING: ../nvdsinfer/nvdsinfer_func_utils.cpp:36 [TRT]: Current optimization profile is: 0. Please ensure there are no enqueued operations pending in this context prior to switching profiles
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:685 [Implicit Engine Info]: layers num: 3
0   INPUT  kFLOAT input_1         3x368x640
1   OUTPUT kFLOAT conv2d_bbox     16x23x40
2   OUTPUT kFLOAT conv2d_cov/Sigmoid 4x23x40

0:02:07.947109209    11 0x56370fd0e460 INFO                 nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<primary_gie> [UID 1]: Load new model:/home/telconet/dev/nvidia/deepstream-5.1/samples/configs/deepstream-app/config_infer_primary.txt sucessfully

Runtime commands:
        h: Print this help
        q: Quit

        p: Pause
        r: Resume

NOTE: To expand a source in the 2D tiled display and view object details, left-click on the source.
      To go back to the tiled display, right-click anywhere on the window.


**PERF:  FPS 0 (Avg)    FPS 1 (Avg)     FPS 2 (Avg)     FPS 3 (Avg)
**PERF:  0.00 (0.00)    0.00 (0.00)     0.00 (0.00)     0.00 (0.00)
** INFO: <bus_callback:181>: Pipeline ready

KLT Tracker Init
KLT Tracker Init
KLT Tracker Init
** INFO: <bus_callback:167>: Pipeline running

KLT Tracker Init
**PERF:  41.46 (41.45)  41.67 (41.67)   41.71 (41.71)   41.67 (41.67)
**PERF:  41.42 (41.42)  41.42 (41.52)   41.42 (41.54)   41.42 (41.52)
**PERF:  40.97 (41.28)  40.97 (41.34)   40.97 (41.35)   40.97 (41.34)
**PERF:  41.60 (41.36)  41.60 (41.41)   41.60 (41.42)   41.60 (41.41)
**PERF:  41.49 (41.36)  41.49 (41.40)   41.49 (41.41)   41.49 (41.40)
**PERF:  41.95 (41.44)  41.95 (41.47)   41.95 (41.48)   41.95 (41.47)
**PERF:  41.54 (41.49)  41.54 (41.52)   41.54 (41.52)   41.54 (41.52)
** INFO: <bus_callback:204>: Received EOS. Exiting ...

Quitting
App run successful

Both outputs have the same warning about the calibration cache, so I figure that is not the issue.

So my question would be…
How do I debug the Illegal instruction that dumps the core?
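One approach (a sketch; the deepstream-app path shown is the container's default install location and may differ) is to let the crash write a core file and then read the backtrace with gdb:

```shell
# Allow core dumps before reproducing the crash; raising the limit
# may need privileges, so fall back gracefully.
ulimit -c unlimited 2>/dev/null || echo "could not raise core limit"
echo "core size limit: $(ulimit -c)"
# After 'Illegal instruction (core dumped)', inspect the core:
#   gdb /opt/nvidia/deepstream/deepstream/bin/deepstream-app core
#   (gdb) bt
# A SIGILL during engine building often means some library executed an
# instruction the CPU does not support, so the backtrace usually names
# the offending shared object.
```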

Thank you in advance.


I can run deepstream-app with the config file you posted in the deepstream:5.1-21.02-devel container. Please check Docker Containers — DeepStream 6.1.1 Release documentation

Thank you Fiona.
I have checked the documentation you provided, and the container I am running is the same as yours.

Does your deepstream-app application build the network?
It has come to my attention that when running the 5.0.1 container, if the engine file is not found, deepstream-app will generate it from the caffemodel; the 5.1 container fails to do the same. (The logs are the same ones quoted in the “More Info” section above: 5.1 ends in “Illegal instruction (core dumped)”, while 5.0.1 rebuilds all the engines and runs.)
INFO: ../nvdsinfer/nvdsinfer_func_utils.cpp:39 [TRT]: Generated calibration scales using calibration cache. Make sure that calibration cache has latest scales.
INFO: ../nvdsinfer/nvdsinfer_func_utils.cpp:39 [TRT]: To regenerate calibration cache, please delete the existing one. TensorRT will generate a new calibration cache.
INFO: ../nvdsinfer/nvdsinfer_func_utils.cpp:39 [TRT]: Some tactics do not have sufficient workspace memory to run. Increasing workspace size may increase performance, please check verbose output.
INFO: ../nvdsinfer/nvdsinfer_func_utils.cpp:39 [TRT]: Detected 1 inputs and 1 output network tensors.
0:01:48.664675669    11 0x56370fd0e460 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<secondary_gie_0> NvDsInferContext[UID 4]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1748> [UID = 4]: serialize cuda engine to file: /home/telconet/dev/nvidia/deepstream-5.1/samples/models/Secondary_VehicleTypes/resnet18.caffemodel_b16_gpu0_int8.engine successfully
WARNING: ../nvdsinfer/nvdsinfer_func_utils.cpp:36 [TRT]: Current optimization profile is: 0. Please ensure there are no enqueued operations pending in this context prior to switching profiles
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:685 [Implicit Engine Info]: layers num: 2
0   INPUT  kFLOAT input_1         3x224x224
1   OUTPUT kFLOAT predictions/Softmax 6x1x1

0:01:48.670706741    11 0x56370fd0e460 INFO                 nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<secondary_gie_0> [UID 4]: Load new model:/home/telconet/dev/nvidia/deepstream-5.1/samples/configs/deepstream-app/config_infer_secondary_vehicletypes.txt sucessfully
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream/lib/libnvds_mot_klt.so
gstnvtracker: Optional NvMOT_RemoveStreams not implemented
gstnvtracker: Batch processing is OFF
gstnvtracker: Past frame output is OFF
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:1523 Deserialize engine failed because file path: /home/telconet/dev/nvidia/deepstream-5.1/samples/configs/deepstream-app/../../models/Primary_Detector/resnet10.caffemodel_b4_gpu0_int8.engine open error
0:01:48.710727432    11 0x56370fd0e460 WARN                 nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1690> [UID = 1]: deserialize engine from file :/home/telconet/dev/nvidia/deepstream-5.1/samples/configs/deepstream-app/../../models/Primary_Detector/resnet10.caffemodel_b4_gpu0_int8.engine failed
0:01:48.710749153    11 0x56370fd0e460 WARN                 nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1797> [UID = 1]: deserialize backend context from engine from file :/home/telconet/dev/nvidia/deepstream-5.1/samples/configs/deepstream-app/../../models/Primary_Detector/resnet10.caffemodel_b4_gpu0_int8.engine failed, try rebuild
0:01:48.710762259    11 0x56370fd0e460 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1715> [UID = 1]: Trying to create engine from model files
INFO: ../nvdsinfer/nvdsinfer_func_utils.cpp:39 [TRT]: Reading Calibration Cache for calibrator: EntropyCalibration2
INFO: ../nvdsinfer/nvdsinfer_func_utils.cpp:39 [TRT]: Generated calibration scales using calibration cache. Make sure that calibration cache has latest scales.
INFO: ../nvdsinfer/nvdsinfer_func_utils.cpp:39 [TRT]: To regenerate calibration cache, please delete the existing one. TensorRT will generate a new calibration cache.
INFO: ../nvdsinfer/nvdsinfer_func_utils.cpp:39 [TRT]: Some tactics do not have sufficient workspace memory to run. Increasing workspace size may increase performance, please check verbose output.
INFO: ../nvdsinfer/nvdsinfer_func_utils.cpp:39 [TRT]: Detected 1 inputs and 2 output network tensors.
0:02:07.942582373    11 0x56370fd0e460 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1748> [UID = 1]: serialize cuda engine to file: /home/telconet/dev/nvidia/deepstream-5.1/samples/models/Primary_Detector/resnet10.caffemodel_b4_gpu0_int8.engine successfully
WARNING: ../nvdsinfer/nvdsinfer_func_utils.cpp:36 [TRT]: Current optimization profile is: 0. Please ensure there are no enqueued operations pending in this context prior to switching profiles
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:685 [Implicit Engine Info]: layers num: 3
0   INPUT  kFLOAT input_1         3x368x640
1   OUTPUT kFLOAT conv2d_bbox     16x23x40
2   OUTPUT kFLOAT conv2d_cov/Sigmoid 4x23x40

0:02:07.947109209    11 0x56370fd0e460 INFO                 nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<primary_gie> [UID 1]: Load new model:/home/telconet/dev/nvidia/deepstream-5.1/samples/configs/deepstream-app/config_infer_primary.txt sucessfully

Runtime commands:
        h: Print this help
        q: Quit

        p: Pause
        r: Resume

NOTE: To expand a source in the 2D tiled display and view object details, left-click on the source.
      To go back to the tiled display, right-click anywhere on the window.


**PERF:  FPS 0 (Avg)    FPS 1 (Avg)     FPS 2 (Avg)     FPS 3 (Avg)
**PERF:  0.00 (0.00)    0.00 (0.00)     0.00 (0.00)     0.00 (0.00)
** INFO: <bus_callback:181>: Pipeline ready

KLT Tracker Init
KLT Tracker Init
KLT Tracker Init
** INFO: <bus_callback:167>: Pipeline running

KLT Tracker Init
**PERF:  41.46 (41.45)  41.67 (41.67)   41.71 (41.71)   41.67 (41.67)
**PERF:  41.42 (41.42)  41.42 (41.52)   41.42 (41.54)   41.42 (41.52)
**PERF:  40.97 (41.28)  40.97 (41.34)   40.97 (41.35)   40.97 (41.34)
**PERF:  41.60 (41.36)  41.60 (41.41)   41.60 (41.42)   41.60 (41.41)
**PERF:  41.49 (41.36)  41.49 (41.40)   41.49 (41.41)   41.49 (41.40)
**PERF:  41.95 (41.44)  41.95 (41.47)   41.95 (41.48)   41.95 (41.47)
**PERF:  41.54 (41.49)  41.54 (41.52)   41.54 (41.52)   41.54 (41.52)
** INFO: <bus_callback:204>: Received EOS. Exiting ...

Quitting
App run successful

Both outputs show the same warning about the calibration cache, so I figure that is not the issue.
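For what it's worth, the "Version tag does not match" error from the 5.1 container comes from engines serialized by the older TensorRT in 5.0.1. A minimal sketch for clearing them so they get rebuilt on the next run (the `MODELS_DIR` path is an assumption based on the paths in the log above):

```shell
# Engines serialized by the 5.0.1 container's TensorRT cannot be
# deserialized by 5.1 ("Version tag does not match"). Deleting the
# stale .engine files forces DeepStream to rebuild them from the
# model files on the next run.
MODELS_DIR=${MODELS_DIR:-/home/user/dev/nvidia/deepstream-5.1/samples/models}
if [ -d "$MODELS_DIR" ]; then
    find "$MODELS_DIR" -name '*.engine' -print -delete
fi
```

The rebuild takes a minute or two per model, as seen in the timestamps of the log above.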

So my question is:
How do I debug the Illegal instruction that dumps the core?
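In case it helps others, one way to get a backtrace for the crash is to run the app under gdb inside the container. This is a sketch; it assumes gdb is installed in the container, and the config file is the one used in the run above:

```shell
# Allow the kernel to write a core file on crash
ulimit -c unlimited

# Run the app under gdb non-interactively; "bt" prints the backtrace
# of the faulting frame once the process stops on SIGILL.
gdb -batch -ex run -ex bt --args deepstream-app \
    -c source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt
```

The top frames of the backtrace should show which library raised the illegal instruction (for example, a CUDA or TensorRT library built for a different GPU architecture).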

Thank you in advance.

Have you updated your driver? NVIDIA driver 460.32 or higher is needed. Quickstart Guide — DeepStream 6.1.1 Release documentation

And please check your software and platform compatibility according to the Quickstart Guide — DeepStream 6.1.1 Release documentation
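A quick way to verify the driver requirement from the host is to query `nvidia-smi` and compare against the 460.32 minimum. This is a sketch; it assumes `nvidia-smi` is on the PATH:

```shell
# DeepStream 5.1 (CUDA 11.1) requires NVIDIA driver 460.32 or newer.
required=460.32
driver=$(nvidia-smi --query-gpu=driver_version --format=csv,noheader | head -n1)

# sort -V does a version-aware comparison: if the required version
# sorts first, the installed driver is at least as new.
if [ "$(printf '%s\n' "$required" "$driver" | sort -V | head -n1)" = "$required" ]; then
    echo "driver $driver is new enough (>= $required)"
else
    echo "driver $driver is too old; upgrade to >= $required"
fi
```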