Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU) GPU
• DeepStream Version 7.0
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only) 535.183.01
• Issue Type( questions, new requirements, bugs) questions
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)
I am running Triton Inference Server in a container serving a YOLOv8 model. The model has been converted to ONNX. I am using an RTSP stream from a camera. When using Python and the Triton HTTP API (http://:8000), I am able to get the video feed with inferences.
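For reference, the working Python path looks roughly like this (a minimal sketch, assuming the tritonclient package is installed; the localhost URL, the 640x640 input size, and the "output0" output name are my assumptions based on a default YOLOv8 ONNX export, while "images" matches the tensor_name in my config below):

import numpy as np
import tritonclient.http as httpclient

# Connect to Triton's HTTP endpoint (host elided above; localhost assumed here)
client = httpclient.InferenceServerClient(url="localhost:8000")

# Dummy preprocessed frame: NCHW float32 in [0, 1]
# (640x640 and "output0" are assumptions from a default YOLOv8 export)
frame = np.random.rand(1, 3, 640, 640).astype(np.float32)

inp = httpclient.InferInput("images", [1, 3, 640, 640], "FP32")
inp.set_data_from_numpy(frame)
out = httpclient.InferRequestedOutput("output0")

result = client.infer(model_name="yolo", inputs=[inp], outputs=[out])
print(result.as_numpy("output0").shape)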
I am trying to use the DeepStream container to connect to the Triton Inference Server container and use it for inference. I have been trying to use deepstream-test3 with the following config file. The libnvdsinfer_custom_impl_Yolo.so has been compiled from the DeepStream-Yolo repo (https://github.com/marcoslucianops/DeepStream-Yolo).
After running it, the application crashes with a segmentation fault. Any help would be much appreciated.
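For context, deepstream-test3 wires the -c file to the inference element roughly like this (a sketch of the relevant lines, assuming the stock sample; the element name string is illustrative):

import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst

Gst.init(None)
# --pgie nvinferserver selects the Triton element instead of nvinfer
pgie = Gst.ElementFactory.make("nvinferserver", "primary-inference")
# -c config_triton.yml is applied via the config-file-path property
pgie.set_property("config-file-path", "config_triton.yml")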
python3 deepstream_test_3.py -i rtsp://admin:pass@10.128.4.11:554/Streaming/Channels/101 --pgie nvinferserver -c config_triton.yml
{'input': ['rtsp://admin:pass@10.128.4.11:554/Streaming/Channels/101'], 'configfile': 'config_triton.yml', 'pgie': 'nvinferserver', 'no_display': False, 'file_loop': False, 'disable_probe': False, 'silent': False}
Creating Pipeline
Creating streamux
Creating source_bin 0
Creating source bin
source-bin-00
Creating Pgie
Creating tiler
Creating nvvidconv
Creating nvosd
Is it Integrated GPU? : 0
Creating EGLSink
At least one of the sources is live
WARNING: Overriding infer-config batch-size 0 with number of sources 1
Adding elements to Pipeline
Linking elements in the Pipeline
Now playing…
0 : rtsp://admin:pass@10.128.4.11:554/Streaming/Channels/101
Starting pipeline
WARNING: infer_proto_utils.cpp:155 auto-update preprocess.normalize.scale_factor to 1.0000
INFO: infer_grpc_backend.cpp:170 TritonGrpcBackend id:1 initialized for model: yolo
Decodebin child added: source
Decodebin child added: decodebin0
Decodebin child added: rtph264depay0
Decodebin child added: h264parse0
Decodebin child added: capsfilter0
Decodebin child added: nvv4l2decoder0
In cb_newpad
gstname= video/x-raw
features= <Gst.CapsFeatures object at 0x73ab7ed1f6a0 (GstCapsFeatures at 0x73aa4c0bc920)>
Segmentation fault (core dumped)
config_triton.yml:

infer_config {
  unique_id: 1
  gpu_ids: 0
  max_batch_size: 1
  backend {
    triton {
      model_name: "yolo"
      version: -1
      grpc {
        url: "0.0.0.0:8001"
        enable_cuda_buffer_sharing: true
      }
    }
  }
  preprocess {
    network_format: IMAGE_FORMAT_RGB
    tensor_order: TENSOR_ORDER_LINEAR
    tensor_name: "images"
    frame_scaling_hw: FRAME_SCALING_HW_DEFAULT
    frame_scaling_filter: 1
    symmetric_padding: 1
    maintain_aspect_ratio: 1
    #normalize {
    #  scale_factor: 0.0039215697906911373
    #  channel_offsets: [0.0, 0.0, 0.0]
    #}
  }
  postprocess {
    labelfile_path: "./labels.txt"
    detection {
      num_detected_classes: 80
      custom_parse_bbox_func: "NvDsInferParseYolo"
      nms {
        confidence_threshold: 0.25
        iou_threshold: 0.45
        topk: 300
      }
    }
  }
  custom_lib {
    path: "./libnvdsinfer_custom_impl_Yolo.so"
  }
}
input_control {
  process_mode: PROCESS_MODE_FULL_FRAME
  operate_on_gie_id: -1
  interval: 0
}