Python app Triton inference, DeepStream 5.1

Kindly let us know the changes required in the DeepStream 5.1 Python app to support Triton inferencing.

I am running the sample app deepstream_ssd_parser.py and get the error below. Could someone assist?

ubuntu@ip-172-31-7-39:/opt/nvidia/deepstream/deepstream-5.1/sources/deepstream_python_apps/apps/deepstream-ssd-parser$ python3 deepstream_ssd_parser.py /home/ubuntu/sriharsha/videos/parkinglot.mp4
0:00:00.041186203 20730 0x55fcd4519ef0 WARN ladspa gstladspa.c:507:plugin_init: no LADSPA plugins found, check LADSPA_PATH
0:00:00.056273169 20730 0x55fcd4519ef0 WARN vaapi gstvaapiutils.c:77:gst_vaapi_warning: va_getDriverName() failed with unknown libva error,driver_name=(null)
0:00:00.056639697 20730 0x55fcd4519ef0 WARN vaapi gstvaapiutils.c:77:gst_vaapi_warning: va_getDriverName() failed with unknown libva error,driver_name=(null)
0:00:00.056740847 20730 0x55fcd4519ef0 ERROR default gstvaapi.c:254:plugin_init: Cannot create a VA display
0:00:00.065444965 20730 0x55fcd4519ef0 ERROR omx gstomx.c:2769:plugin_init: Core ‘/usr/lib/libomxil-bellagio.so.0’ does not exist for element ‘omxmpeg4videodec’
0:00:00.065464796 20730 0x55fcd4519ef0 ERROR omx gstomx.c:2769:plugin_init: Core ‘/usr/lib/libomxil-bellagio.so.0’ does not exist for element ‘omxh264dec’
0:00:00.065481782 20730 0x55fcd4519ef0 ERROR omx gstomx.c:2769:plugin_init: Core ‘/usr/lib/libomxil-bellagio.so.0’ does not exist for element ‘omxmpeg4videoenc’
0:00:00.065494406 20730 0x55fcd4519ef0 ERROR omx gstomx.c:2769:plugin_init: Core ‘/usr/lib/libomxil-bellagio.so.0’ does not exist for element ‘omxaacenc’
0:00:00.065504311 20730 0x55fcd4519ef0 WARN GST_PLUGIN_LOADING gstplugin.c:527:gst_plugin_register_func: plugin “/usr/lib/x86_64-linux-gnu/gstreamer-1.0/libgstomx.so” failed to initialise
0:00:00.195909153 20730 0x55fcd4519ef0 WARN default gstsf.c:98:gst_sf_create_audio_template_caps: format 0x120000: ‘AVR (Audio Visual Research)’ is not mapped
0:00:00.195933302 20730 0x55fcd4519ef0 WARN default gstsf.c:98:gst_sf_create_audio_template_caps: format 0x180000: ‘CAF (Apple Core Audio File)’ is not mapped
0:00:00.195943950 20730 0x55fcd4519ef0 WARN default gstsf.c:98:gst_sf_create_audio_template_caps: format 0x100000: ‘HTK (HMM Tool Kit)’ is not mapped
0:00:00.195953080 20730 0x55fcd4519ef0 WARN default gstsf.c:98:gst_sf_create_audio_template_caps: format 0xc0000: ‘MAT4 (GNU Octave 2.0 / Matlab 4.2)’ is not mapped
0:00:00.195964091 20730 0x55fcd4519ef0 WARN default gstsf.c:98:gst_sf_create_audio_template_caps: format 0xd0000: ‘MAT5 (GNU Octave 2.1 / Matlab 5.0)’ is not mapped
0:00:00.195975267 20730 0x55fcd4519ef0 WARN default gstsf.c:98:gst_sf_create_audio_template_caps: format 0x210000: ‘MPC (Akai MPC 2k)’ is not mapped
0:00:00.195984037 20730 0x55fcd4519ef0 WARN default gstsf.c:98:gst_sf_create_audio_template_caps: format 0xe0000: ‘PVF (Portable Voice Format)’ is not mapped
0:00:00.195993556 20730 0x55fcd4519ef0 WARN default gstsf.c:98:gst_sf_create_audio_template_caps: format 0x160000: ‘SD2 (Sound Designer II)’ is not mapped
0:00:00.196004172 20730 0x55fcd4519ef0 WARN default gstsf.c:98:gst_sf_create_audio_template_caps: format 0x190000: ‘WVE (Psion Series 3)’ is not mapped
0:00:00.200455030 20730 0x55fcd4519ef0 ERROR omx gstomx.c:2769:plugin_init: Core ‘/usr/lib/libomxil-bellagio.so.0’ does not exist for element ‘omxmpeg4videodec’
0:00:00.200478040 20730 0x55fcd4519ef0 ERROR omx gstomx.c:2769:plugin_init: Core ‘/usr/lib/libomxil-bellagio.so.0’ does not exist for element ‘omxh264dec’
0:00:00.200492498 20730 0x55fcd4519ef0 ERROR omx gstomx.c:2769:plugin_init: Core ‘/usr/lib/libomxil-bellagio.so.0’ does not exist for element ‘omxmpeg4videoenc’
0:00:00.200502480 20730 0x55fcd4519ef0 ERROR omx gstomx.c:2769:plugin_init: Core ‘/usr/lib/libomxil-bellagio.so.0’ does not exist for element ‘omxaacenc’
0:00:00.200510236 20730 0x55fcd4519ef0 WARN GST_PLUGIN_LOADING gstplugin.c:527:gst_plugin_register_func: plugin “/usr/lib/x86_64-linux-gnu/gstreamer-1.0/libgstomx-generic.so” failed to initialise
0:00:00.307551716 20730 0x55fcd4519ef0 WARN GST_PLUGIN_LOADING gstplugin.c:792:_priv_gst_plugin_load_file_for_registry: module_open failed: libtritonserver.so: cannot open shared object file: No such file or directory

(gst-plugin-scanner:20730): GStreamer-WARNING **: 06:26:15.665: Failed to load plugin ‘/usr/lib/x86_64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_inferserver.so’: libtritonserver.so: cannot open shared object file: No such file or directory
0:00:00.470863774 20729 0x2124b20 WARN GST_REGISTRY gstregistry.c:1835:gst_update_registry: registry update failed: Error writing registry cache to /home/ubuntu/.cache/gstreamer-1.0/registry.x86_64.bin: Permission denied
Creating Pipeline

Creating Source
Creating H264Parser
Creating Decoder
Creating NvStreamMux
Creating Nvinferserver
0:00:00.507841662 20729 0x2124b20 WARN GST_ELEMENT_FACTORY gstelementfactory.c:456:gst_element_factory_make: no such element factory “nvinferserver”!
Unable to create Nvinferserver
Creating Nvvidconv
Creating OSD (nvosd)
Creating Queue
Creating Converter 2 (nvvidconv2)
Creating capsfilter
Creating Encoder
Creating Code Parser
Creating Container
Creating Sink
Playing file /home/ubuntu/sriharsha/videos/parkinglot.mp4
Traceback (most recent call last):
  File "deepstream_ssd_parser.py", line 457, in <module>
    sys.exit(main(sys.argv))
  File "deepstream_ssd_parser.py", line 380, in main
    pgie.set_property("config-file-path", "dstest_ssd_nopostprocess.txt")
AttributeError: 'NoneType' object has no attribute 'set_property'
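The root cause is visible earlier in the log: libtritonserver.so could not be opened, so the nvinferserver plugin failed to register and Gst.ElementFactory.make returned None. A minimal guard (the helper name make_element is mine, not from the sample) fails fast at element creation instead of later at set_property:

```python
# Hedged sketch: fail fast when a GStreamer element factory is missing,
# instead of hitting AttributeError later on set_property(). The factory
# callable is injected so the helper is testable outside a Gst session.
def make_element(factory, factory_name, element_name):
    """Create an element via factory(factory_name, element_name); raise if None."""
    element = factory(factory_name, element_name)
    if element is None:
        raise RuntimeError(
            f"Unable to create {element_name!r}: no factory {factory_name!r}. "
            "For nvinferserver, check that libtritonserver.so is on "
            "LD_LIBRARY_PATH (DeepStream 5.1 Triton support is intended to "
            "run inside the DeepStream Triton container)."
        )
    return element

# In the pipeline code this would be called as, e.g.:
#   pgie = make_element(Gst.ElementFactory.make, "nvinferserver", "primary-inference")
```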

If you are just running the DeepStream sample, I think this is a setup issue.
Can you please provide the setup details as required when filing the topic?

It's a custom app I am running.

The current error I get is below:

I0224 05:03:36.234360 20424 model_repository_manager.cc:1045] loading: Weapon_model:1
I0224 05:03:37.412101 20424 logging.cc:49] [MemUsageChange] Init CUDA: CPU +320, GPU +0, now: CPU 758, GPU 1982 (MiB)
I0224 05:03:37.413235 20424 logging.cc:49] Loaded engine size: 339 MB
I0224 05:03:37.413401 20424 logging.cc:49] [MemUsageSnapshot] deserializeCudaEngine begin: CPU 758 MiB, GPU 1982 MiB
E0224 05:03:37.414033 20424 logging.cc:43] 1: [stdArchiveReader.cpp::StdArchiveReader::34] Error Code 1: Serialization (Serialization assertion safeVersionRead == safeSerializationVersion failed.Version tag does not match. Note: Current Version: 43, Serialized Engine Version: 96)
E0224 05:03:37.414069 20424 logging.cc:43] 4: [runtime.cpp::deserializeCudaEngine::75] Error Code 4: Internal Error (Engine deserialization failed.)
E0224 05:03:37.428310 20424 model_repository_manager.cc:1215] failed to load ‘Weapon_model’ version 1: Internal: unable to create TensorRT engine
ERROR: infer_trtis_server.cpp:1053 Triton: failed to load model Weapon_model, triton_err_str:Invalid argument, err_msg:load failed for model ‘Weapon_model’: version 1: Internal: unable to create TensorRT engine;

ERROR: infer_trtis_backend.cpp:45 failed to load model: Weapon_model, nvinfer error:NVDSINFER_TRITON_ERROR
ERROR: infer_trtis_backend.cpp:184 failed to initialize backend while ensuring model:Weapon_model ready, nvinfer error:NVDSINFER_TRITON_ERROR
0:00:01.923992553 20424 0x2d26790 ERROR nvinferserver gstnvinferserver.cpp:362:gst_nvinfer_server_logger: nvinferserver[UID 5]: Error in createNNBackend() <infer_trtis_context.cpp:248> [UID = 5]: failed to initialize triton backend for model:Weapon_model, nvinfer error:NVDSINFER_TRITON_ERROR
I0224 05:03:37.428587 20424 server.cc:234] Waiting for in-flight requests to complete.
I0224 05:03:37.428633 20424 server.cc:249] Timeout 30: Found 0 live models and 0 in-flight non-inference requests
0:00:01.924215946 20424 0x2d26790 ERROR nvinferserver gstnvinferserver.cpp:362:gst_nvinfer_server_logger: nvinferserver[UID 5]: Error in initialize() <infer_base_context.cpp:81> [UID = 5]: create nn-backend failed, check config file settings, nvinfer error:NVDSINFER_TRITON_ERROR
0:00:01.924276772 20424 0x2d26790 WARN nvinferserver gstnvinferserver_impl.cpp:507:start: error: Failed to initialize InferTrtIsContext
0:00:01.924305827 20424 0x2d26790 WARN nvinferserver gstnvinferserver_impl.cpp:507:start: error: Config file path: config/Weapon/config_infer_primary_detector_ssd_inception_v2_coco_2018_01_28.txt
0:00:01.924927877 20424 0x2d26790 WARN nvinferserver gstnvinferserver.cpp:460:gst_nvinfer_server_start: error: gstnvinferserver_impl start failed
0:00:01.925250630 20424 0x2d26790 WARN GST_PADS gstpad.c:1142:gst_pad_set_active:primary-inference:sink Failed to activate pad
[NvMultiObjectTracker] De-initialized
Warning: gst-library-error-quark: Configuration file batch-size reset to: 1 (5): gstnvinferserver_impl.cpp(284): validatePluginConfig (): /GstPipeline:pipeline0/GstNvInferServer:primary-inference
Error: gst-resource-error-quark: Failed to initialize InferTrtIsContext (1): gstnvinferserver_impl.cpp(507): start (): /GstPipeline:pipeline0/GstNvInferServer:primary-inference:
Config file path: config/Weapon/config_infer_primary_detector_ssd_inception_v2_coco_2018_01_28.txt
Exiting app

q flushed
Closing RMQ Connection

I mean, can you provide setup info like below?

• Hardware Platform (Jetson / GPU): Tesla T4
• DeepStream Version: 6.1
• JetPack Version (valid for Jetson only):
• TensorRT Version: 7.1
• NVIDIA GPU Driver Version (valid for GPU only): Tesla T4

Are you using a TRT engine built on another GPU card?

I generated a new engine file with TensorRT 8.0, and I get the error below. Kindly help.


In cb_newpad

gstname= video/x-raw
features= <Gst.CapsFeatures object at 0x7f96e6497940 (GstCapsFeatures at 0x7f9388002f40)>
ERROR: infer_postprocess.cpp:574 Default bbox parsing function support datatypeFP32 only but received output tensor: BatchedNMS with datatype: kInt32
ERROR: infer_postprocess.cpp:1044 Failed to parse bboxes
ERROR: infer_postprocess.cpp:375 detection parsing output tensor data failed, uid:5, nvinfer error:NVDSINFER_OUTPUT_PARSING_FAILED
ERROR: infer_postprocess.cpp:262 Infer context initialize inference info failed, nvinfer error:NVDSINFER_OUTPUT_PARSING_FAILED
0:00:15.591888858 31067 0x7f93a8002600 WARN           nvinferserver gstnvinferserver.cpp:519:gst_nvinfer_server_push_buffer:<primary-inference> error: inference failed with unique-id:5
Error: gst-library-error-quark: inference failed with unique-id:5 (1): gstnvinferserver.cpp(519): gst_nvinfer_server_push_buffer (): /GstPipeline:pipeline0/GstNvInferServer:primary-inference
Exiting app

Traceback (most recent call last):
  File "PoC_Helmet.py", line 165, in stdemux_sink_pad_buffer_probe
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
AttributeError: module 'pyds' has no attribute 'gst_buffer_get_nvds_batch_meta'
ERROR: infer_postprocess.cpp:574 Default bbox parsing function support datatypeFP32 only but received output tensor: BatchedNMS with datatype: kInt32
ERROR: infer_postprocess.cpp:1044 Failed to parse bboxes
ERROR: infer_postprocess.cpp:375 detection parsing output tensor data failed, uid:5, nvinfer error:NVDSINFER_OUTPUT_PARSING_FAILED
ERROR: infer_postprocess.cpp:262 Infer context initialize inference info failed, nvinfer error:NVDSINFER_OUTPUT_PARSING_FAILED
0:00:15.683218885 31067 0x7f93a8002600 WARN           nvinferserver gstnvinferserver.cpp:519:gst_nvinfer_server_push_buffer:<primary-inference> error: inference failed with unique-id:5
ERROR: infer_postprocess.cpp:574 Default bbox parsing function support datatypeFP32 only but received output tensor: BatchedNMS with datatype: kInt32
ERROR: infer_postprocess.cpp:1044 Failed to parse bboxes
ERROR: infer_postprocess.cpp:375 detection parsing output tensor data failed, uid:5, nvinfer error:NVDSINFER_OUTPUT_PARSING_FAILED
ERROR: infer_postprocess.cpp:262 Infer context initialize inference info failed, nvinfer error:NVDSINFER_OUTPUT_PARSING_FAILED
0:00:15.685387330 31067 0x7f93a8002600 WARN           nvinferserver gstnvinferserver.cpp:519:gst_nvinfer_server_push_buffer:<primary-inference> error: inference failed with unique-id:5
Traceback (most recent call last):
  File "PoC_Helmet.py", line 165, in stdemux_sink_pad_buffer_probe
ERROR: infer_postprocess.cpp:574 Default bbox parsing function support datatypeFP32 only but received output tensor: BatchedNMS with datatype: kInt32
ERROR: infer_postprocess.cpp:1044 Failed to parse bboxes
ERROR: infer_postprocess.cpp:375 detection parsing output tensor data failed, uid:5, nvinfer error:NVDSINFER_OUTPUT_PARSING_FAILED
ERROR: infer_postprocess.cpp:262 Infer context initialize inference info failed, nvinfer error:NVDSINFER_OUTPUT_PARSING_FAILED
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
AttributeError: module 'pyds' has no attribute 'gst_buffer_get_nvds_batch_meta'
0:00:15.688429317 31067 0x7f93a8002600 WARN           nvinferserver gstnvinferserver.cpp:519:gst_nvinfer_server_push_buffer:<primary-inference> error: inference failed with unique-id:5
Traceback (most recent call last):
  File "PoC_Helmet.py", line 165, in stdemux_sink_pad_buffer_probe
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
AttributeError: module 'pyds' has no attribute 'gst_buffer_get_nvds_batch_meta'
Traceback (most recent call last):
  File "PoC_Helmet.py", line 165, in stdemux_sink_pad_buffer_probe
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
AttributeError: module 'pyds' has no attribute 'gst_buffer_get_nvds_batch_meta'
[NvMultiObjectTracker] De-initialized
**ERROR: infer_postprocess.cpp:574 Default bbox parsing function support datatypeFP32 only but received output tensor: BatchedNMS with datatype: kInt32**
ERROR: infer_postprocess.cpp:1044 Failed to parse bboxes
ERROR: infer_postprocess.cpp:375 detection parsing output tensor data failed, uid:5, nvinfer error:NVDSINFER_OUTPUT_PARSING_FAILED
ERROR: infer_postprocess.cpp:262 Infer context initialize inference info failed, nvinfer error:NVDSINFER_OUTPUT_PARSING_FAILED
I0224 14:02:00.484208 31067 model_repository_manager.cc:1078] unloading: Helmet_model:1
I0224 14:02:00.484847 31067 server.cc:234] Waiting for in-flight requests to complete.
I0224 14:02:00.484877 31067 server.cc:249] Timeout 30: Found 1 live models and 0 in-flight non-inference requests
I0224 14:02:00.491651 31067 logging.cc:49] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +0, now: CPU 3253, GPU 4319 (MiB)
I0224 14:02:00.504363 31067 model_repository_manager.cc:1195] successfully unloaded 'Helmet_model' version 1
I0224 14:02:01.484980 31067 server.cc:249] Timeout 29: Found 0 live models and 0 in-flight non-inference requests
q flushed
Closing RMQ Connection

Below is the config file. I do not want to post-process; how do I add num_detected_classes and labelfile_path in the preprocess section? Could you help the way out?

infer_config {
  unique_id: 5
  gpu_ids: [0]
  max_batch_size: 4
  backend {
    triton {
      #model_name: "ssd_inception_v2_coco_2018_01_28"
      model_name: "Helmet_model"
      version: -1

      model_repo {
        #root: "../../triton_model_repo"
        root: "../../model_repository"
        log_level: 2
        strict_model_config: true
        tf_gpu_memory_fraction: 0.35
        tf_disable_soft_placement: 0
        # Triton runtime would reserve 64MB pinned memory
        pinned_memory_pool_byte_size: 67108864
        # Triton runtime would reserve 64MB CUDA device memory on GPU 0
        cuda_device_memory { device: 0, memory_pool_byte_size: 67108864 }
      }
    }
  }

  postprocess {
    #labelfile_path: "../../triton_model_repo/ssd_inception_v2_coco_2018_01_28/labels.txt"
    labelfile_path: "../../model_repository/Helmet_model/labels.txt"
    detection {
      num_detected_classes: 2
      #custom_parse_bbox_func: "NvDsInferParseCustomTfSSD"
    }
  }

  preprocess {
    network_format: IMAGE_FORMAT_RGB
    tensor_order: TENSOR_ORDER_NONE
    maintain_aspect_ratio: 0
    frame_scaling_hw: FRAME_SCALING_HW_DEFAULT
    frame_scaling_filter: 1
    normalize {
      scale_factor: 1.0
      channel_offsets: [0, 0, 0]
    }
  }

  extra {
    copy_input_to_host_buffers: false
  }

  custom_lib {
    path: "/opt/nvidia/deepstream/deepstream-6.0/lib/libnvds_infercustomparser.so"
  }
}
input_control {
  process_mode: PROCESS_MODE_FULL_FRAME
  interval: 0
}

output_control {
  detect_control {
    default_filter { bbox_filter { min_width: 32, min_height: 32 } }
  }
}
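If the goal is to skip DeepStream's bbox parsing entirely and consume the raw output tensors in the Python probe (as deepstream_ssd_parser.py does), the usual shape is to replace the detection postprocess with other {} and attach the tensors as metadata. This is a sketch modeled on the sample's dstest_ssd_nopostprocess.txt, not a verified drop-in for this model:

```
infer_config {
  # ... backend / preprocess as before ...
  postprocess {
    # no bbox parsing in DeepStream; raw output tensors pass through
    other {}
  }
}
output_control {
  # attach raw output tensors as NvDsInferTensorMeta for the app to parse
  output_tensor_meta: true
}
```

With this shape, num_detected_classes and labelfile_path are not needed at all, since your probe does the parsing.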

Could someone help with the error below?

l_obj ========, None
[NvMultiObjectTracker] De-initialized
ERROR: infer_postprocess.cpp:574 Default bbox parsing function support datatypeFP32 only but received output tensor: BatchedNMS with datatype: kInt32
ERROR: infer_postprocess.cpp:1044 Failed to parse bboxes
ERROR: infer_postprocess.cpp:375 detection parsing output tensor data failed, uid:1, nvinfer error:NVDSINFER_OUTPUT_PARSING_FAILED
ERROR: infer_postprocess.cpp:262 Infer context initialize inference info failed, nvinfer error:NVDSINFER_OUTPUT_PARSING_FAILED
ERROR: infer_postprocess.cpp:574 Default bbox parsing function support datatypeFP32 only but received output tensor: BatchedNMS with datatype: kInt32
ERROR: infer_postprocess.cpp:1044 Failed to parse bboxes
ERROR: infer_postprocess.cpp:375 detection parsing output tensor data failed, uid:1, nvinfer error:NVDSINFER_OUTPUT_PARSING_FAILED
ERROR: infer_postprocess.cpp:262 Infer context initialize inference info failed, nvinfer error:NVDSINFER_OUTPUT_PARSING_FAILED
q flushed

Hi @h9945394143
The errors, especially "ERROR: infer_postprocess.cpp:1044 Failed to parse bboxes", indicate that you can't use the default post-processor; that is, you need to implement your own post-processor for your model output.

Use the config below (replacing NvDsInferParseCustomTfSSD with your own post-processor function).
The source code of "NvDsInferParseCustomTfSSD" is under /opt/nvidia/deepstream/deepstream-6.0/samples/configs/deepstream-app-triton/.

And note: if you enable NMS, the post-processor runs first, and then NMS is applied.

custom_parse_bbox_func: "NvDsInferParseCustomTfSSD"
...
custom_lib {
    path: "/opt/nvidia/deepstream/deepstream-6.0/lib/libnvds_infercustomparser.so"
}

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.