Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU) Jetson
• DeepStream Version 6.1.1
• JetPack Version (valid for Jetson only) 5.0.2
• TensorRT Version 8.4
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.) LD_PRELOAD=/home/user/Neo/deepstream_python_apps/apps/deepstream-occupancy/person-head-detection/nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so ./tritonserver --model-repository=/home/user/triton_model_repo --backend-directory=/opt/nvidia/deepstream/deepstream/lib/triton_backends/ --allow-grpc=1
Hi, I am trying to run Triton Inference Server in gRPC mode so that DeepStream can access the model through nvinferserver.
I used the above command to start the YOLOv5 model, and the server runs fine. However, as soon as detections appear, the DeepStream pipeline throws an error and stops.
The only error I get, even with the debug level set to 3, is the one below. I am running Python code here.
0:00:23.925216093 2033209 0xfffee0079240 DEBUG v4l2bufferpool gstv4l2bufferpool.c:2077:gst_v4l2_buffer_pool_process:<nvv4l2decoder0:pool:sink> process buffer 0xfffec80151a8
Segmentation fault (core dumped)
Please find the code, model, and custom parser attached below:
occupancy.py (20.0 KB)
config_nvdsanalytics.txt (3.6 KB)
yolov5_pgie_nvinferserver_grpc_config.txt (2.0 KB)
crowdhuman_yolov5m.cfg (9.2 KB)
crowd_labels.txt (11 Bytes)
nvdsinfer_custom_impl_Yolo.zip (822.9 KB)
crowdhuman_yolov5m.zip (51.1 MB)
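For context, my nvinferserver gRPC config (the attached yolov5_pgie_nvinferserver_grpc_config.txt) follows the standard protobuf text format, roughly along these lines. This is a simplified sketch; the model name, Triton URL, class count, and the custom parser function name are placeholders from my setup, not guaranteed to match yours:

```
# Sketch of an nvinferserver gRPC config (values are assumptions from my setup)
infer_config {
  unique_id: 1
  gpu_ids: [0]
  max_batch_size: 1
  backend {
    triton {
      model_name: "crowdhuman_yolov5m"   # name of the model in the Triton repo
      version: -1                        # latest version
      grpc {
        url: "localhost:8001"            # Triton gRPC endpoint
      }
    }
  }
  preprocess {
    network_format: IMAGE_FORMAT_RGB
    tensor_order: TENSOR_ORDER_LINEAR
    maintain_aspect_ratio: 1
    normalize {
      scale_factor: 0.0039215686         # 1/255
    }
  }
  postprocess {
    labelfile_path: "crowd_labels.txt"
    detection {
      num_detected_classes: 2            # person, head
      custom_parse_bbox_func: "NvDsInferParseYolo"  # exported by the custom lib
    }
  }
  custom_lib {
    path: "nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so"
  }
}
input_control {
  process_mode: PROCESS_MODE_FULL_FRAME
  interval: 0
}
```

The crash happens only once the custom bbox parser starts producing detections, which is why I suspect either the parser library loaded via LD_PRELOAD or the metadata handling downstream.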