DeepStream app - NvDCF tracker (accuracy)

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) RTX 4090
• DeepStream Version 7.0
• JetPack Version (valid for Jetson only)
• TensorRT Version 8.6.1.6
• NVIDIA GPU Driver Version (valid for GPU only) 546.80
• Issue Type( questions, new requirements, bugs) questions and errors
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing) running deepstream-app with the config file given below
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

I am trying to build a pipeline with the NvDCF tracker in the accuracy profile, which uses ReID. I am able to run the app and get an output file, but there are no detections or tracking results visible in it, and I am also getting a bus-related error. I am sharing the deepstream-app config file and the terminal output below; thanks in advance.

config.txt:
#running tracker from terminal: deepstream-app -c deepstream_app_config.txt

[primary-gie]
enable=1
plugin-type=0
gpu-id=0
batch-size=1
interval=0
bbox-border-color0=1;0;0;1
bbox-border-color1=0;1;1;1
bbox-border-color2=0;0;1;1
bbox-border-color3=0;1;0;1

# Use PeopleNet as PGIE
config-file=config_infer_primary_PeopleNet.txt
# Other [primary-gie] configs

[tracker]
enable=1
tracker-width=640
tracker-height=384
ll-lib-file=/opt/nvidia/deepstream/deepstream-7.0/lib/libnvds_nvmultiobjecttracker.so
ll-config-file=config_tracker_NvDCF_accuracy.yml
gpu-id=0
# Enables tracking ID display on OSD
display-tracking-id=1
# Compute engine to use for scaling: 0 - Default, 1 - GPU, 2 - VIC (Jetson only)
compute-hw=1

[source0]
enable=1
type=3
uri=file:///test_clips/masked_clip_10m0s_60s.mp4
num-sources=1
gpu-id=0

[sink0]
enable=1
type=3 #4
container=1
codec=1
enc-type=0
bitrate=4000000
sync=0
gpu-id=0
profile=0
nvbuf-memory-type=0
output-file=output.mp4
# Output resolution for the encoded file
width=1920
height=1080
source-id=0
##encoder=x264enc

[streammux]
live-source=0
# Output resolution for the streammux
width=1920
height=1080
batch-size=1
batched-push-timeout=40000
nvbuf-memory-type=0
enable-padding=0
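
For context, the accuracy profile gets its ReID settings from the YAML referenced by ll-config-file above. A minimal sketch of that file's ReID section is shown below; the key names follow the Gst-nvtracker ReID documentation, and the values are only what I expect the stock DeepStream 7.0 sample to contain, so please compare against the actual config_tracker_NvDCF_accuracy.yml rather than copying this verbatim:

# Sketch of the ReID section in config_tracker_NvDCF_accuracy.yml (illustrative values)
ReID:
  reidType: 2                # ReID-based re-association (assumed; check the shipped value)
  batchSize: 100             # matches the "_b100_" in the cached engine name in the log below
  reidFeatureSize: 256       # feature dimension of the re-identification model
  inferDims: [3, 256, 128]   # CHW input dimensions of the ReID network
  networkMode: 1             # 0=FP32, 1=FP16, 2=INT8
  tltEncodedModel: "/opt/nvidia/deepstream/deepstream-7.0/samples/models/Tracker/resnet50_market1501.etlt"
  tltModelKey: "nvidia_tao"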

terminal output and error:
GST_DEBUG=1 deepstream-app -c deepstream_app_config_test.txt

gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream-7.0/lib/libnvds_nvmultiobjecttracker.so
~~ CLOG[src/modules/ReID/ReID.cpp, loadTRTEngine() @line 605]: Engine file does not exist
[NvMultiObjectTracker] Load engine failed. Create engine again.
WARNING: [TRT]: onnx2trt_utils.cpp:374: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
WARNING: …/nvdsinfer/nvdsinfer_model_builder.cpp:1215 INT8 calibration file not specified. Trying FP16 mode.
WARNING: [TRT]: TensorRT encountered issues when converting weights between types and that could affect accuracy.
WARNING: [TRT]: If this is not the desired behavior, please modify the weights or retrain with regularization to adjust the magnitude of the weights.
WARNING: [TRT]: Check verbose logs for the list of affected weights.
WARNING: [TRT]: - 87 weights are affected by this issue: Detected subnormal FP16 values.
WARNING: [TRT]: - 77 weights are affected by this issue: Detected values less than smallest positive FP16 subnormal value and converted them to the FP16 minimum subnormalized value.
[NvMultiObjectTracker] Serialized plan file cached at location: /opt/nvidia/deepstream/deepstream-7.0/samples/models/Tracker/resnet50_market1501.etlt_b100_gpu0_fp16.engine
[NvMultiObjectTracker] Initialized
0:01:55.095180430 21188 0x558d9490b3a0 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:2095> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-7.0/samples/models/tao_pretrained_models/peopleNet/resnet34_peoplenet_int8.engine
WARNING: [TRT]: The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
INFO: …/nvdsinfer/nvdsinfer_model_builder.cpp:612 [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT input_1:0 3x544x960
1 OUTPUT kFLOAT output_cov/Sigmoid:0 3x34x60
2 OUTPUT kFLOAT output_bbox/BiasAdd:0 12x34x60

0:01:55.398767313 21188 0x558d9490b3a0 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2198> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-7.0/samples/models/tao_pretrained_models/peopleNet/resnet34_peoplenet_int8.engine
0:01:55.409622736 21188 0x558d9490b3a0 INFO nvinfer gstnvinfer_impl.cpp:343:notifyLoadModelStatus:<primary_gie> [UID 1]: Load new model:/home/fmd/football_analysis/nvdcf_test/config/config_infer_primary_PeopleNet.txt sucessfully

Runtime commands:
    h: Print this help
    q: Quit

    p: Pause
    r: Resume

** INFO: <bus_callback:291>: Pipeline ready

** INFO: <bus_callback:277>: Pipeline running

nvstreammux: Successfully handled EOS for source_id=0
** INFO: <bus_callback:314>: Received EOS. Exiting …

Quitting
[NvMultiObjectTracker] De-initialized
0:02:09.013624355 21188 0x558d9490b3a0 ERROR GST_BUS gstbus.c:1075:gst_bus_remove_watch: no bus watch was present
App run successful
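
A note on the log: on the first run the tracker reported "Engine file does not exist" and rebuilt the ReID engine, then cached the serialized plan at the path printed above. To avoid rebuilding on every run, that cached plan can be referenced from the ReID section of the tracker YAML; the key name is taken from the Gst-nvtracker ReID documentation, so verify it against your config_tracker_NvDCF_accuracy.yml:

# In config_tracker_NvDCF_accuracy.yml, point modelEngineFile at the plan cached in the log above
ReID:
  modelEngineFile: "/opt/nvidia/deepstream/deepstream-7.0/samples/models/Tracker/resnet50_market1501.etlt_b100_gpu0_fp16.engine"

With that in place, the "[NvMultiObjectTracker] Load engine failed. Create engine again." step should be skipped on subsequent runs.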

Could you try this sample for nvtracker accuracy: deepstream_reference_apps/deepstream-tracker-3d (in the NVIDIA-AI-IOT/deepstream_reference_apps repository on GitHub)?

Yes, it's working. I will investigate the issue by taking the given sample as a reference, thanks.

There has been no update from you for a while, so we are assuming this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.
