Different outputs for TLT and TRT

I am training an object detection model with TLT. When I run inference using tlt-infer, the accuracy is good. But when I convert it (etlt -> TRT engine) using tlt-converter and use it in DeepStream, I detect fewer objects for the same frame. I also tried using the .etlt file directly in DeepStream, but the model could not be converted to an engine.

• DeepStream 5.1, dGPU

Error log:

ERROR: …/nvdsinfer/nvdsinfer_func_utils.cpp:33 [TRT]: INVALID_CONFIG: The engine plan file is not compatible with this version of TensorRT, expecting library version 7.2.3 got 7.2.1, please rebuild.
ERROR: …/nvdsinfer/nvdsinfer_func_utils.cpp:33 [TRT]: engine.cpp (1646) - Serialization Error in deserialize: 0 (Core engine deserialization failure)
ERROR: …/nvdsinfer/nvdsinfer_func_utils.cpp:33 [TRT]: INVALID_STATE: std::exception
ERROR: …/nvdsinfer/nvdsinfer_func_utils.cpp:33 [TRT]: INVALID_CONFIG: Deserialize the cuda engine failed.
ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:1567 Deserialize engine failed from file: /opt/nvidia/deepstream/deepstream-5.1/sources/apps/sample_apps/deepstream-transfer-learning-app/configs/ds_head/final_model6/model_head_6_fb16.trt
0:00:02.762528681 3228 0x55f6001e1410 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1691> [UID = 1]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-5.1/sources/apps/sample_apps/deepstream-transfer-learning-app/configs/ds_head/final_model6/model_head_6_fb16.trt failed
0:00:02.762560827 3228 0x55f6001e1410 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1798> [UID = 1]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-5.1/sources/apps/sample_apps/deepstream-transfer-learning-app/configs/ds_head/final_model6/model_head_6_fb16.trt failed, try rebuild
0:00:02.762574742 3228 0x55f6001e1410 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1716> [UID = 1]: Trying to create engine from model files
INFO: …/nvdsinfer/nvdsinfer_func_utils.cpp:39 [TRT]: Some tactics do not have sufficient workspace memory to run. Increasing workspace size may increase performance, please check verbose output.
INFO: …/nvdsinfer/nvdsinfer_func_utils.cpp:39 [TRT]: Detected 1 inputs and 2 output network tensors.
0:00:10.944775553 3228 0x55f6001e1410 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1749> [UID = 1]: serialize cuda engine to file: /opt/nvidia/deepstream/deepstream-5.1/sources/apps/sample_apps/deepstream-transfer-learning-app/configs/ds_head/final_model6/model_head_6.etlt_b1_gpu0_fp32.engine successfully
INFO: …/nvdsinfer/nvdsinfer_model_builder.cpp:685 [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT input_1 3x544x960
1 OUTPUT kFLOAT output_bbox/BiasAdd 4x34x60
2 OUTPUT kFLOAT output_cov/Sigmoid 1x34x60

0:00:10.950039891 3228 0x55f6001e1410 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<primary_gie> [UID 1]: Load new model:/opt/nvidia/deepstream/deepstream-5.1/sources/apps/sample_apps/deepstream-transfer-learning-app/configs/ds_head/config_infer_head.txt sucessfully

Runtime commands:
h: Print this help
q: Quit

p: Pause
r: Resume

**PERF: FPS 0 (Avg)
**PERF: 0.00 (0.00)
** INFO: <bus_callback:181>: Pipeline ready

(deepstream-transfer-learning-app:3228): GLib-GObject-CRITICAL **: 12:23:38.529: g_object_get: assertion ‘G_IS_OBJECT (object)’ failed
** INFO: <bus_callback:167>: Pipeline running

** INFO: <bus_callback:204>: Received EOS. Exiting …

Quitting
App run successful

My files :
Uploading: train.ipynb…
deepstream_app_head.txt (2.7 KB)

config_infer_head.txt (935 Bytes)
capture_rules.csv (110 Bytes)

The engine is already generated.

Please search for the above error in the DeepStream forum.
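For reference, the deserialization failure in the log is a version mismatch: the .trt engine was serialized with TensorRT 7.2.1, while this DeepStream 5.1 install links against 7.2.3, so nvinfer discards the engine and rebuilds it from the .etlt. A sketch of the relevant config_infer keys (paths taken from the log above; the key value is a placeholder, use the key you passed to tlt-export) that lets DeepStream build and cache the engine locally instead of loading a foreign .trt file:

```ini
[property]
# encoded TLT model; DeepStream rebuilds the engine from this
tlt-encoded-model-file=/opt/nvidia/deepstream/deepstream-5.1/sources/apps/sample_apps/deepstream-transfer-learning-app/configs/ds_head/final_model6/model_head_6.etlt
# encryption key used during tlt-export (placeholder)
tlt-model-key=<your_tlt_key>
# nvinfer serializes the rebuilt engine here and reuses it on later runs
model-engine-file=/opt/nvidia/deepstream/deepstream-5.1/sources/apps/sample_apps/deepstream-transfer-learning-app/configs/ds_head/final_model6/model_head_6.etlt_b1_gpu0_fp32.engine
```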

OK, but why do the same images give different results for TLT and TRT?

Please set a lower pre-cluster-threshold and retry.
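The pre-cluster-threshold is set in the per-class section of the nvinfer config (config_infer_head.txt here); a minimal example, with an illustrative value:

```ini
[class-attrs-all]
# minimum coverage score for a grid cell to enter clustering;
# lower values keep more candidate boxes
pre-cluster-threshold=0.1
```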

I lowered the threshold to 0.1, but the outputs are still different: not wrong, just far fewer detections.
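As an aside, a minimal numpy sketch (hypothetical random scores, not real model output) of why pre-cluster-threshold changes the detection count: in DetectNet_v2-style postprocessing, each cell of the 34x60 coverage map (output_cov/Sigmoid in the log above) whose score passes the threshold becomes a candidate box for the clustering stage.

```python
import numpy as np

# Hypothetical coverage scores standing in for the model's
# output_cov/Sigmoid tensor (1 class, 34x60 grid cells).
rng = np.random.default_rng(0)
cov = rng.random((34, 60))

def candidate_cells(cov: np.ndarray, threshold: float) -> int:
    """Count grid cells whose coverage score passes the pre-cluster
    threshold; only these cells feed candidate boxes into clustering."""
    return int((cov > threshold).sum())

# Lowering the threshold monotonically admits more candidates.
print("candidates @0.4:", candidate_cells(cov, 0.4))
print("candidates @0.1:", candidate_cells(cov, 0.1))
```

If the counts still differ between tlt-infer and the TRT engine at the same threshold, the gap lies upstream of this step (e.g. preprocessing or precision), not in the clustering config.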

I see that you save the images in DeepStream via "img-save".
Can you change its threshold and retry? Or is it possible to check via the display instead?

Also, what is the resolution of uri=file:///opt/nvidia/deepstream/deepstream-5.1/samples/streams/tt5.h264 ?

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.