Getting a Segmentation fault (core dumped) error in multi-model integration

Please provide complete information as applicable to your setup.

• Hardware Platform (GPU): Tesla T4
• DeepStream Version: 5.1
• NVIDIA GPU Driver Version (valid for GPU only): 460.39

Hello,

We are trying to implement multi-model integration using deepstream-python-apps, with TrafficCamNet as the primary detector and VehicleTypeNet as a secondary classifier, and we are getting a Segmentation fault (core dumped) error.
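For reference, the cascading between the two models is driven by the gie-unique-id / operate-on-gie-id keys in the nvinfer configs. A minimal sketch, not our full configs; the unique IDs (1 and 4) and config file names match what the logs below report:

```ini
# dsnvanalytics_pgie_config_vehicle.txt (primary detector), excerpt
[property]
gie-unique-id=1

# dstest2_sgie3_config.txt (secondary classifier), excerpt
[property]
gie-unique-id=4
# run only on objects produced by the primary detector
operate-on-gie-id=1
# 2 = secondary (classifier) mode
process-mode=2
```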

Error Logs:
pr_sec.py:6: PyGIWarning: GstRtspServer was imported without specifying a version first. Use gi.require_version('GstRtspServer', '1.0') before import to ensure that the right version gets loaded.
from gi.repository import GObject, Gst, GstRtspServer
Creating Pipeline

Creating streamux

Creating source_bin 0

Creating source bin
source-bin-00
Creating Pgie

Creating nvtracker

Creating secondary detector

Creating nvdsanalytics

Creating tiler

Creating nvvidconv

Creating nvosd

Creating H264 Encoder
Creating H264 rtppay
Atleast one of the sources is live
Warning: 'input-dims' parameter has been deprecated. Use 'infer-dims' instead.
Adding elements to Pipeline

Linking elements in the Pipeline

*** DeepStream: Launched RTSP Streaming at rtsp://localhost:8661/ds-test ***

Now playing…
1 : rtsp://10.60.33.147:554/stream2
Starting pipeline

0:00:01.625061378 1999 0x30a3640 WARN v4l2 gstv4l2object.c:3038:gst_v4l2_object_get_nearest_size:encoder:sink Unable to try format: Unknown error -1
0:00:01.625115289 1999 0x30a3640 WARN v4l2 gstv4l2object.c:2924:gst_v4l2_object_probe_caps_for_format:encoder:sink Could not probe minimum capture size for pixelformat YM12
0:00:01.625192123 1999 0x30a3640 WARN v4l2 gstv4l2object.c:3038:gst_v4l2_object_get_nearest_size:encoder:sink Unable to try format: Unknown error -1
0:00:01.625208945 1999 0x30a3640 WARN v4l2 gstv4l2object.c:2930:gst_v4l2_object_probe_caps_for_format:encoder:sink Could not probe maximum capture size for pixelformat YM12
0:00:01.625244371 1999 0x30a3640 WARN v4l2 gstv4l2object.c:2375:gst_v4l2_object_add_interlace_mode:0x308acc0 Failed to determine interlace mode
0:00:01.625294515 1999 0x30a3640 WARN v4l2 gstv4l2object.c:3038:gst_v4l2_object_get_nearest_size:encoder:sink Unable to try format: Unknown error -1
0:00:01.625311126 1999 0x30a3640 WARN v4l2 gstv4l2object.c:2924:gst_v4l2_object_probe_caps_for_format:encoder:sink Could not probe minimum capture size for pixelformat NM12
0:00:01.625325834 1999 0x30a3640 WARN v4l2 gstv4l2object.c:3038:gst_v4l2_object_get_nearest_size:encoder:sink Unable to try format: Unknown error -1
0:00:01.625341233 1999 0x30a3640 WARN v4l2 gstv4l2object.c:2930:gst_v4l2_object_probe_caps_for_format:encoder:sink Could not probe maximum capture size for pixelformat NM12
0:00:01.625357403 1999 0x30a3640 WARN v4l2 gstv4l2object.c:2375:gst_v4l2_object_add_interlace_mode:0x308acc0 Failed to determine interlace mode
0:00:01.625435559 1999 0x30a3640 WARN v4l2 gstv4l2object.c:3038:gst_v4l2_object_get_nearest_size:encoder:src Unable to try format: Unknown error -1
0:00:01.625452381 1999 0x30a3640 WARN v4l2 gstv4l2object.c:2924:gst_v4l2_object_probe_caps_for_format:encoder:src Could not probe minimum capture size for pixelformat H264
0:00:01.625466267 1999 0x30a3640 WARN v4l2 gstv4l2object.c:3038:gst_v4l2_object_get_nearest_size:encoder:src Unable to try format: Unknown error -1
0:00:01.625487647 1999 0x30a3640 WARN v4l2 gstv4l2object.c:2930:gst_v4l2_object_probe_caps_for_format:encoder:src Could not probe maximum capture size for pixelformat H264
0:00:02.031898737 1999 0x30a3640 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger: NvDsInferContext[UID 4]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1716> [UID = 4]: Trying to create engine from model files
Warning: Flatten layer ignored. TensorRT implicitly flattens input to FullyConnected layers, but in other circumstances this will result in undefined behavior.
INFO: …/nvdsinfer/nvdsinfer_func_utils.cpp:39 [TRT]: Reading Calibration Cache for calibrator: EntropyCalibration2
INFO: …/nvdsinfer/nvdsinfer_func_utils.cpp:39 [TRT]: Generated calibration scales using calibration cache. Make sure that calibration cache has latest scales.
INFO: …/nvdsinfer/nvdsinfer_func_utils.cpp:39 [TRT]: To regenerate calibration cache, please delete the existing one. TensorRT will generate a new calibration cache.
INFO: …/nvdsinfer/nvdsinfer_func_utils.cpp:39 [TRT]: Detected 1 inputs and 1 output network tensors.
0:00:31.526787823 1999 0x30a3640 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger: NvDsInferContext[UID 4]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1749> [UID = 4]: serialize cuda engine to file: /opt/nvidia/deepstream/deepstream-5.1/samples/models/Secondary_VehicleTypes/resnet18.caffemodel_b16_gpu0_int8.engine successfully
INFO: …/nvdsinfer/nvdsinfer_model_builder.cpp:685 [Implicit Engine Info]: layers num: 2
0 INPUT kFLOAT input_1 3x224x224
1 OUTPUT kFLOAT predictions/Softmax 6x1x1

0:00:31.542568805 1999 0x30a3640 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus: [UID 4]: Load new model:dstest2_sgie3_config.txt sucessfully
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream/lib/libnvds_nvdcf.so
gstnvtracker: Batch processing is ON
gstnvtracker: Past frame output is OFF
[NvDCF][Warning] minTrackingConfidenceDuringInactive is deprecated
[NvDCF] Initialized
0:00:31.825536117 1999 0x30a3640 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1716> [UID = 1]: Trying to create engine from model files
INFO: …/nvdsinfer/nvdsinfer_func_utils.cpp:39 [TRT]: Some tactics do not have sufficient workspace memory to run. Increasing workspace size may increase performance, please check verbose output.
INFO: …/nvdsinfer/nvdsinfer_func_utils.cpp:39 [TRT]: Detected 1 inputs and 2 output network tensors.
0:00:51.267156216 1999 0x30a3640 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1749> [UID = 1]: serialize cuda engine to file: /opt/nvidia/deepstream/deepstream-5.1/sources/dswpark/pri_secondary/resnet18_trafficcamnet_pruned.etlt_b1_gpu0_fp16.engine successfully
INFO: …/nvdsinfer/nvdsinfer_model_builder.cpp:685 [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT input_1 3x544x960
1 OUTPUT kFLOAT output_bbox/BiasAdd 16x34x60
2 OUTPUT kFLOAT output_cov/Sigmoid 4x34x60

0:00:51.278325140 1999 0x30a3640 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus: [UID 1]: Load new model:dsnvanalytics_pgie_config_vehicle.txt sucessfully
Decodebin child added: source

0:00:51.308486207 1999 0x24aa540 FIXME default gstutils.c:3981:gst_pad_create_stream_id_internal:fakesrc0:src Creating random stream-id, consider implementing a deterministic way of creating a stream-id
Decodebin child added: decodebin0

Decodebin child added: rtph264depay0

Decodebin child added: h264parse0

Decodebin child added: capsfilter0

Decodebin child added: nvv4l2decoder0

0:00:51.539527228 1999 0x24aaf70 WARN v4l2 gstv4l2object.c:3038:gst_v4l2_object_get_nearest_size:nvv4l2decoder0:sink Unable to try format: Unknown error -1
0:00:51.539560851 1999 0x24aaf70 WARN v4l2 gstv4l2object.c:2924:gst_v4l2_object_probe_caps_for_format:nvv4l2decoder0:sink Could not probe minimum capture size for pixelformat MJPG
0:00:51.539577703 1999 0x24aaf70 WARN v4l2 gstv4l2object.c:3038:gst_v4l2_object_get_nearest_size:nvv4l2decoder0:sink Unable to try format: Unknown error -1
0:00:51.539595747 1999 0x24aaf70 WARN v4l2 gstv4l2object.c:2930:gst_v4l2_object_probe_caps_for_format:nvv4l2decoder0:sink Could not probe maximum capture size for pixelformat MJPG
0:00:51.539644097 1999 0x24aaf70 WARN v4l2 gstv4l2object.c:3038:gst_v4l2_object_get_nearest_size:nvv4l2decoder0:sink Unable to try format: Unknown error -1
0:00:51.539669024 1999 0x24aaf70 WARN v4l2 gstv4l2object.c:2924:gst_v4l2_object_probe_caps_for_format:nvv4l2decoder0:sink Could not probe minimum capture size for pixelformat MPG4
0:00:51.539887133 1999 0x24aaf70 WARN v4l2 gstv4l2object.c:3038:gst_v4l2_object_get_nearest_size:nvv4l2decoder0:sink Unable to try format: Unknown error -1
0:00:51.539904225 1999 0x24aaf70 WARN v4l2 gstv4l2object.c:2930:gst_v4l2_object_probe_caps_for_format:nvv4l2decoder0:sink Could not probe maximum capture size for pixelformat MPG4
0:00:51.539932087 1999 0x24aaf70 WARN v4l2 gstv4l2object.c:3038:gst_v4l2_object_get_nearest_size:nvv4l2decoder0:sink Unable to try format: Unknown error -1
0:00:51.539950291 1999 0x24aaf70 WARN v4l2 gstv4l2object.c:2924:gst_v4l2_object_probe_caps_for_format:nvv4l2decoder0:sink Could not probe minimum capture size for pixelformat MPG2
0:00:51.540007208 1999 0x24aaf70 WARN v4l2 gstv4l2object.c:3038:gst_v4l2_object_get_nearest_size:nvv4l2decoder0:sink Unable to try format: Unknown error -1
0:00:51.540024491 1999 0x24aaf70 WARN v4l2 gstv4l2object.c:2930:gst_v4l2_object_probe_caps_for_format:nvv4l2decoder0:sink Could not probe maximum capture size for pixelformat MPG2
0:00:51.540078562 1999 0x24aaf70 WARN v4l2 gstv4l2object.c:3038:gst_v4l2_object_get_nearest_size:nvv4l2decoder0:sink Unable to try format: Unknown error -1
0:00:51.540114589 1999 0x24aaf70 WARN v4l2 gstv4l2object.c:2924:gst_v4l2_object_probe_caps_for_format:nvv4l2decoder0:sink Could not probe minimum capture size for pixelformat H265
0:00:51.540125720 1999 0x24aaf70 WARN v4l2 gstv4l2object.c:3038:gst_v4l2_object_get_nearest_size:nvv4l2decoder0:sink Unable to try format: Unknown error -1
0:00:51.540169042 1999 0x24aaf70 WARN v4l2 gstv4l2object.c:2930:gst_v4l2_object_probe_caps_for_format:nvv4l2decoder0:sink Could not probe maximum capture size for pixelformat H265
0:00:51.540189490 1999 0x24aaf70 WARN v4l2 gstv4l2object.c:3038:gst_v4l2_object_get_nearest_size:nvv4l2decoder0:sink Unable to try format: Unknown error -1
0:00:51.540202484 1999 0x24aaf70 WARN v4l2 gstv4l2object.c:2924:gst_v4l2_object_probe_caps_for_format:nvv4l2decoder0:sink Could not probe minimum capture size for pixelformat VP90
0:00:51.540216010 1999 0x24aaf70 WARN v4l2 gstv4l2object.c:3038:gst_v4l2_object_get_nearest_size:nvv4l2decoder0:sink Unable to try format: Unknown error -1
0:00:51.540233312 1999 0x24aaf70 WARN v4l2 gstv4l2object.c:2930:gst_v4l2_object_probe_caps_for_format:nvv4l2decoder0:sink Could not probe maximum capture size for pixelformat VP90
0:00:51.540251205 1999 0x24aaf70 WARN v4l2 gstv4l2object.c:3038:gst_v4l2_object_get_nearest_size:nvv4l2decoder0:sink Unable to try format: Unknown error -1
0:00:51.540264040 1999 0x24aaf70 WARN v4l2 gstv4l2object.c:2924:gst_v4l2_object_probe_caps_for_format:nvv4l2decoder0:sink Could not probe minimum capture size for pixelformat VP80
0:00:51.540275531 1999 0x24aaf70 WARN v4l2 gstv4l2object.c:3038:gst_v4l2_object_get_nearest_size:nvv4l2decoder0:sink Unable to try format: Unknown error -1
0:00:51.540288415 1999 0x24aaf70 WARN v4l2 gstv4l2object.c:2930:gst_v4l2_object_probe_caps_for_format:nvv4l2decoder0:sink Could not probe maximum capture size for pixelformat VP80
0:00:51.540308834 1999 0x24aaf70 WARN v4l2 gstv4l2object.c:3038:gst_v4l2_object_get_nearest_size:nvv4l2decoder0:sink Unable to try format: Unknown error -1
0:00:51.540322118 1999 0x24aaf70 WARN v4l2 gstv4l2object.c:2924:gst_v4l2_object_probe_caps_for_format:nvv4l2decoder0:sink Could not probe minimum capture size for pixelformat H264
0:00:51.540335944 1999 0x24aaf70 WARN v4l2 gstv4l2object.c:3038:gst_v4l2_object_get_nearest_size:nvv4l2decoder0:sink Unable to try format: Unknown error -1
0:00:51.540346615 1999 0x24aaf70 WARN v4l2 gstv4l2object.c:2930:gst_v4l2_object_probe_caps_for_format:nvv4l2decoder0:sink Could not probe maximum capture size for pixelformat H264
0:00:51.540411797 1999 0x24aaf70 WARN v4l2 gstv4l2object.c:3038:gst_v4l2_object_get_nearest_size:nvv4l2decoder0:src Unable to try format: Unknown error -1
0:00:51.540425673 1999 0x24aaf70 WARN v4l2 gstv4l2object.c:2924:gst_v4l2_object_probe_caps_for_format:nvv4l2decoder0:src Could not probe minimum capture size for pixelformat NM12
0:00:51.540436203 1999 0x24aaf70 WARN v4l2 gstv4l2object.c:3038:gst_v4l2_object_get_nearest_size:nvv4l2decoder0:src Unable to try format: Unknown error -1
0:00:51.540446582 1999 0x24aaf70 WARN v4l2 gstv4l2object.c:2930:gst_v4l2_object_probe_caps_for_format:nvv4l2decoder0:src Could not probe maximum capture size for pixelformat NM12
0:00:51.540503579 1999 0x24aaf70 WARN v4l2 gstv4l2object.c:2375:gst_v4l2_object_add_interlace_mode:0x7fd4680b0470 Failed to determine interlace mode
0:00:51.660799023 1999 0x24aaf70 ERROR v4l2 gstv4l2object.c:2077:gst_v4l2_object_get_interlace_mode: Driver bug detected - check driver with v4l2-compliance from v4l-utils.git - media (V4L2, DVB and IR) applications and libraries

(the ERROR line above is repeated 15 more times)

In cb_newpad

gstname= video/x-raw
features= <Gst.CapsFeatures object at 0x7fd61eb14be8 (GstCapsFeatures at 0x7fd4680adb40)>
0:00:51.666281664 1999 0x24aaf70 FIXME basesink gstbasesink.c:3145:gst_base_sink_default_event: stream-start event without group-id. Consider implementing group-id handling in the upstream elements
0:00:51.719763091 1999 0x24aaf70 WARN v4l2bufferpool gstv4l2bufferpool.c:1066:gst_v4l2_buffer_pool_start:encoder:pool:src Uncertain or not enough buffers, enabling copy threshold
0:00:51.722712912 1999 0x24aaf70 WARN v4l2videodec gstv4l2videodec.c:1673:gst_v4l2_video_dec_decide_allocation: Duration invalid, not setting latency
0:00:51.722759900 1999 0x24aaf70 WARN v4l2bufferpool gstv4l2bufferpool.c:1066:gst_v4l2_buffer_pool_start:nvv4l2decoder0:pool:src Uncertain or not enough buffers, enabling copy threshold
0:00:52.711487163 1999 0x7fd4680d1cf0 WARN v4l2bufferpool gstv4l2bufferpool.c:1513:gst_v4l2_buffer_pool_dqbuf:nvv4l2decoder0:pool:src Driver should never set v4l2_buffer.field to ANY
##################################################
Frame Number= 0 stream id= 0 Number of Objects= 0 CARS= 0
##################################################
0:00:52.840566068 1999 0x24a9720 WARN v4l2bufferpool gstv4l2bufferpool.c:1513:gst_v4l2_buffer_pool_dqbuf:encoder:pool:src Driver should never set v4l2_buffer.field to ANY
##################################################
Frame Number= 1 stream id= 0 Number of Objects= 0 CARS= 0
##################################################
##################################################
Frame Number= 2 stream id= 0 Number of Objects= 0 CARS= 0
##################################################
##################################################
Frame Number= 3 stream id= 0 Number of Objects= 1 CARS= 1
##################################################
Fatal Python error: Segmentation fault

Thread 0x00007fd6c82c7740 (most recent call first):
File "/usr/lib/python3/dist-packages/gi/overrides/GLib.py", line 585 in run
File "pr_sec.py", line 661 in main
File "pr_sec.py", line 669 in <module>
Segmentation fault (core dumped)
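On the 'input-dims' deprecation warning in the log above: in the secondary model's nvinfer config the old key can be swapped for infer-dims. A sketch only; the dims shown match the 3x224x224 input the engine reports:

```ini
[property]
# deprecated form:
# input-dims=3;224;224;0
infer-dims=3;224;224
```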

Output of Nvidia-smi:

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 460.39 Driver Version: 460.39 CUDA Version: 11.2 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 Tesla T4 Off | 00000000:27:00.0 Off | 0 |
| N/A 48C P0 28W / 70W | 0MiB / 15109MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 1 Tesla T4 Off | 00000000:83:00.0 Off | 0 |
| N/A 53C P0 29W / 70W | 310MiB / 15109MiB | 8% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 2 Tesla T4 Off | 00000000:A3:00.0 Off | 0 |
| N/A 72C P0 45W / 70W | 2669MiB / 15109MiB | 37% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 3 Tesla T4 Off | 00000000:C3:00.0 Off | 0 |
| N/A 54C P0 29W / 70W | 0MiB / 15109MiB | 6% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
+-----------------------------------------------------------------------------+

Docker command for creating a container:
docker run -it -d --name <container_name> -p <p1>:<p2> --gpus all <image_name> bash

Logging into the already created container:
docker exec -it <container_name> bash

To run the DeepStream file:
python3 pr_sec.py

Here is the python file:

import sys
sys.path.append('../')
import gi
import configparser
gi.require_version('Gst', '1.0')
from gi.repository import GObject, Gst, GstRtspServer
from gi.repository import GLib
from ctypes import *
import time
import sys
import math
import platform
from common.is_aarch_64 import is_aarch64
from common.bus_call import bus_call
from common.FPS import GETFPS
import numpy as np
import pyds
import datetime
from timeit import time
from datetime import datetime
import cv2
import faulthandler
faulthandler.enable()

codec = "H264"
bitrate=4000000
fps_streams={}

MAX_DISPLAY_LEN=64
PGIE_CLASS_ID_CARS = 0
PGIE_CLASS_ID_PEOPLE = 1
PGIE_CLASS_ID_ROADSIGN = 2
PGIE_CLASS_ID_TWO_WHEELERS = 3
MUXER_OUTPUT_WIDTH=1920
MUXER_OUTPUT_HEIGHT=1080
MUXER_BATCH_TIMEOUT_USEC=4000000
TILED_OUTPUT_WIDTH=1280
TILED_OUTPUT_HEIGHT=720
GST_CAPS_FEATURES_NVMM = "memory:NVMM"
OSD_PROCESS_MODE= 0
OSD_DISPLAY_TEXT= 1

pgie_classes_str = ["cars", "people", "road signs", "two-wheelers"]
thres_time = 20

# for checking the id with time

first_appearence_dict = {}

# for checking the position

dict_pos = {}

# nvanalytics_src_pad_buffer_probe will extract metadata received on nvtiler sink pad
# and update params for drawing rectangle, object information etc.

def nvanalytics_src_pad_buffer_probe(pad, info, u_data):
    frame_number = 0
    num_rects = 0
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        print("Unable to get GstBuffer ")
        return

    # Retrieve batch metadata from the gst_buffer
    # Note that pyds.gst_buffer_get_nvds_batch_meta() expects the
    # C address of gst_buffer as input, which is obtained with hash(gst_buffer)
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    #dict_pos = {}
    while l_frame:
        try:
            # Note that l_frame.data needs a cast to pyds.NvDsFrameMeta
            # The casting is done by pyds.NvDsFrameMeta.cast()
            # The casting also keeps ownership of the underlying memory
            # in the C code, so the Python garbage collector will leave
            # it alone.
            frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        except StopIteration:
            break
        #old_time = datetime.datetime.now()
        #thres_time = 0.005
        #first_appearence_dict = {}
        """DS PROBE LOGIC"""
        #if obj_meta.object_id in first_appearence_dict.keys():
        #    time_diff = first_appearence_dict[obj_meta.object_id] - datetime.now()
        #    if time_diff > thres_time:
        #        print("RAISE FLAG")
        #    else:
        #        first_appearence_dict[obj_meta.object_id] = datetime.now()

        frame_number = frame_meta.frame_num
        l_obj = frame_meta.obj_meta_list
        num_rects = frame_meta.num_obj_meta
        obj_counter = {
            PGIE_CLASS_ID_CARS: 0,
            PGIE_CLASS_ID_PEOPLE: 0,
            PGIE_CLASS_ID_ROADSIGN: 0,
            PGIE_CLASS_ID_TWO_WHEELERS: 0
        }
        print("#" * 50)
        counter = 0
        vehicle_meta_status = ""  # entry/exit status
        #vehicle_id_list = [""]
        #dict_pos={}

        while l_obj:
            #dict_pos={}
            try:
                # Note that l_obj.data needs a cast to pyds.NvDsObjectMeta
                # The casting is done by pyds.NvDsObjectMeta.cast()
                obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
            except StopIteration:
                break
            obj_counter[obj_meta.class_id] += 1
            l_user_meta = obj_meta.obj_user_meta_list

            #vehicle_id_list = [""]
            # Extract object level meta data from NvDsAnalyticsObjInfo
            while l_user_meta:
                try:
                    user_meta = pyds.NvDsUserMeta.cast(l_user_meta.data)
                    if user_meta.base_meta.meta_type == pyds.nvds_get_user_meta_type("NVIDIA.DSANALYTICSOBJ.USER_META"):
                        user_meta_data = pyds.NvDsAnalyticsObjInfo.cast(user_meta.user_meta_data)
                        #if user_meta_data.dirStatus: print("Object {0} moving in direction: {1}".format(obj_meta.object_id, user_meta_data.dirStatus))
                        if user_meta_data.lcStatus:
                            #print("=================Object {0} line crossing status: {1}".format(obj_meta.object_id, user_meta_data.lcStatus))
                            pass
                        if user_meta_data.ocStatus:
                            print("Object {0} overcrowding status: {1}".format(obj_meta.object_id, user_meta_data.ocStatus))
                        #if user_meta_data.roiStatus: print("Object {0} roi status: {1}".format(obj_meta.object_id, user_meta_data.roiStatus))
                        if user_meta_data.roiStatus:
                            print("Object {0} roi ROI: {1}".format(obj_meta.object_id, user_meta_data.roiStatus))
                        roi_status = user_meta_data.roiStatus
                        #print("HEYYYYYYYYY**********user_meta_data.roiStatus", user_meta_data.roiStatus)
                        vehicle_id = obj_meta.object_id
                        #dict_pos = {}
                        '''
                        if obj_meta.object_id not in first_appearence_dict.keys():
                            #dict_pos={}
                            first_appearence_dict[obj_meta.object_id] = datetime.now()
                            dict_pos[obj_meta.object_id] = obj_meta.rect_params.top
                            print(first_appearence_dict)
                            print("oldtop", obj_meta.rect_params.top)
                        else:
                            #print(first_appearence_dict[obj_meta.object_id])
                            #print(datetime.now())
                            print("newtop", obj_meta.rect_params.top)
                            print("oldtop", dict_pos[obj_meta.object_id])
                            posdiff = obj_meta.rect_params.top - dict_pos[obj_meta.object_id]
                            #print("newtop", obj_meta.rect_params.top)
                            #print("oldtop", dict_pos[obj_meta.object_id])
                            timediff = datetime.now() - first_appearence_dict[obj_meta.object_id]
                            #timediff1 = (timediff.hour*3600) + (timediff.minute*60) + timediff.second
                            timediff1 = timediff.seconds
                            #print(timediff)
                            #print(timediff1)
                            print(posdiff)
                            if posdiff < 1 and timediff1 > thres_time:
                                counter += 1
                                #print("Wrongly Parked")
                                text = "{}|{}secs.".format('wrongly parked', str(np.round(timediff1, 1)))
                                vehicle_meta_status = "Wrongly Parked"
                                #cv2.putText(frame_meta, text, (int(bbox[0]-50), int(bbox[1]-40)), 0, 5e-3 * 100, (0,0,255), 2)
                                #cv2.putText(frame_meta.base_meta, text, 0, 5e-3 * 100, (0,0,255), 2)
                        '''

                        #if obj_meta.object_id in first_appearence_dict.keys():
                        #    time_diff = first_appearence_dict[obj_meta.object_id] - datetime.now()
                        #    print(time_diff)
                        #    if time_diff > thres_time:
                        #        print("time_diff")
                        #        print("WRONGLY PARKED")
                        #else:
                        #    first_appearence_dict[obj_meta.object_id] = datetime.now()
                        #    print(first_appearence_dict)

                        #if time_diff_final > .005:
                        #    #print("Wrongly Parked")
                        #    vehicle_meta_status = "Wrongly Parked"
                        #else: print('Correct!')

                except StopIteration:
                    break

                try:
                    l_user_meta = l_user_meta.next
                except StopIteration:
                    break
            try:
                l_obj = l_obj.next
            except StopIteration:
                break

        # debug print disabled: obj_meta may be unbound when a frame has no objects
        #print('=========object metadata====', obj_meta.text_params.display_text)

        # Get meta data from NvDsAnalyticsFrameMeta
        l_user = frame_meta.frame_user_meta_list
        while l_user:
            try:
                user_meta = pyds.NvDsUserMeta.cast(l_user.data)
                if user_meta.base_meta.meta_type == pyds.nvds_get_user_meta_type("NVIDIA.DSANALYTICSFRAME.USER_META"):
                    user_meta_data = pyds.NvDsAnalyticsFrameMeta.cast(user_meta.user_meta_data)
                    if user_meta_data.objInROIcnt:
                        print("Objs in ROI: {0}".format(user_meta_data.objInROIcnt))

                    if user_meta_data.objLCCumCnt:
                        print("Linecrossing Cumulative: {0}".format(user_meta_data.objLCCumCnt))
                        user_meta_status = str(user_meta_data.objLCCumCnt)

                    if user_meta_data.objLCCurrCnt:
                        print("Linecrossing Current Frame: {0}".format(user_meta_data.objLCCurrCnt))

                    if user_meta_data.ocStatus:
                        print("Overcrowding status: {0}".format(user_meta_data.ocStatus))
            except StopIteration:
                break
            try:
                l_user = l_user.next
            except StopIteration:
                break

        #print("Frame Number=", frame_number, "stream id=", frame_meta.pad_index, "Number of Objects=", num_rects, "Vehicle_count=", obj_counter[PGIE_CLASS_ID_VEHICLE], "CARS_count=", obj_counter[PGIE_CLASS_ID_CARS])
        print("Frame Number=", frame_number, "stream id=", frame_meta.pad_index, "Number of Objects=", num_rects, "CARS=", obj_counter[PGIE_CLASS_ID_CARS])
        # Get frame rate through this probe
        fps_streams["stream{0}".format(frame_meta.pad_index)].get_fps()
        # for displaying frames on OSD

        display_meta = pyds.nvds_acquire_display_meta_from_pool(batch_meta)
        display_meta.num_labels = 1
        py_nvosd_text_params = display_meta.text_params[0]

        #text = "{}|{}secs.".format('wrongly parked', str(np.round(time1, 1)))

        #py_nvosd_text_params.display_text = "Timer = {}".format(text)
        py_nvosd_text_params.display_text = "Frame Number={} Parking Status:{} Count of Wrongly Parked Vehicles:{}".format(frame_number, vehicle_meta_status, counter)
        py_nvosd_text_params.x_offset = 70
        py_nvosd_text_params.y_offset = 16
        py_nvosd_text_params.font_params.font_name = "Serif"
        py_nvosd_text_params.font_params.font_size = 14
        py_nvosd_text_params.font_params.font_color.red = 1.0
        py_nvosd_text_params.font_params.font_color.green = 1.0
        py_nvosd_text_params.font_params.font_color.blue = 1.0
        py_nvosd_text_params.font_params.font_color.alpha = 1.0
        py_nvosd_text_params.set_bg_clr = 1
        py_nvosd_text_params.text_bg_clr.red = 0.0
        py_nvosd_text_params.text_bg_clr.green = 0.0
        py_nvosd_text_params.text_bg_clr.blue = 0.0
        py_nvosd_text_params.text_bg_clr.alpha = 1.0
        #print("============================DISPLAY META===========================", display_meta)
        #for i in display_meta.items():
        #    print(i)
        pyds.nvds_add_display_meta_to_frame(frame_meta, display_meta)

        #py_nvosd_text_params.display_text = "Frame Number={} Number of Objects={} Vehicle_count={} CARS_count={}". \
        #        format(frame_number, num_rects, obj_counter[PGIE_CLASS_ID_VEHICLE], obj_counter[PGIE_CLASS_ID_CARS])
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
        print("#" * 50)

    return Gst.PadProbeReturn.OK
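An aside on the commented-out parking-timer math in the probe above: subtracting two `datetime`s yields a `timedelta`, which has no `.hour` or `.minute` attributes, so `total_seconds()` is the safe form. A standalone sketch (the helper name is hypothetical):

```python
from datetime import datetime

def dwell_seconds(first_seen, now):
    # elapsed time between first appearance and now, in seconds;
    # timedelta has no .hour/.minute, so use total_seconds()
    return (now - first_seen).total_seconds()

first = datetime(2021, 4, 1, 12, 0, 0)
later = datetime(2021, 4, 1, 12, 0, 35)
print(dwell_seconds(first, later))  # 35.0
```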

def cb_newpad(decodebin, decoder_src_pad, data):
    print("In cb_newpad\n")
    caps = decoder_src_pad.get_current_caps()
    gststruct = caps.get_structure(0)
    gstname = gststruct.get_name()
    source_bin = data
    features = caps.get_features(0)

    # Need to check if the pad created by the decodebin is for video and not
    # audio.
    print("gstname=", gstname)
    if (gstname.find("video") != -1):
        # Link the decodebin pad only if decodebin has picked nvidia
        # decoder plugin nvdec_*. We do this by checking if the pad caps contain
        # NVMM memory features.
        print("features=", features)
        if features.contains("memory:NVMM"):
            # Get the source bin ghost pad
            bin_ghost_pad = source_bin.get_static_pad("src")
            if not bin_ghost_pad.set_target(decoder_src_pad):
                sys.stderr.write("Failed to link decoder src pad to source bin ghost pad\n")
        else:
            sys.stderr.write(" Error: Decodebin did not pick nvidia decoder plugin.\n")

def decodebin_child_added(child_proxy,Object,name,user_data):
print("Decodebin child added:", name, "\n")
if(name.find("decodebin") != -1):
    Object.connect("child-added", decodebin_child_added, user_data)
if(is_aarch64() and name.find("nvv4l2decoder") != -1):
    print("Setting bufapi_version\n")
    Object.set_property("bufapi-version", True)

def create_source_bin(index,uri):
print("Creating source bin")

# Create a source GstBin to abstract this bin's content from the rest of the
# pipeline
bin_name="source-bin-%02d" %index
print(bin_name)
nbin=Gst.Bin.new(bin_name)
if not nbin:
    sys.stderr.write(" Unable to create source bin \n")

# Source element for reading from the uri.
# We will use decodebin and let it figure out the container format of the
# stream and the codec and plug the appropriate demux and decode plugins.
uri_decode_bin=Gst.ElementFactory.make("uridecodebin", "uri-decode-bin")
if not uri_decode_bin:
    sys.stderr.write(" Unable to create uri decode bin \n")
# We set the input uri to the source element
uri_decode_bin.set_property("uri",uri)
# Connect to the "pad-added" signal of the decodebin which generates a
# callback once a new pad for raw data has been created by the decodebin
uri_decode_bin.connect("pad-added",cb_newpad,nbin)
uri_decode_bin.connect("child-added",decodebin_child_added,nbin)

# We need to create a ghost pad for the source bin which will act as a proxy
# for the video decoder src pad. The ghost pad will not have a target right
# now. Once the decode bin creates the video decoder and generates the
# cb_newpad callback, we will set the ghost pad target to the video decoder
# src pad.
Gst.Bin.add(nbin,uri_decode_bin)
bin_pad=nbin.add_pad(Gst.GhostPad.new_no_target("src",Gst.PadDirection.SRC))
if not bin_pad:
    sys.stderr.write(" Failed to add ghost pad in source bin \n")
    return None
return nbin

def main(args):
# Check input arguments
if len(args) < 2:
    sys.stderr.write("usage: %s <uri1> [uri2] ... [uriN]\n" % args[0])
    sys.exit(1)

for i in range(0,len(args)-1):
    fps_streams["stream{0}".format(i)]=GETFPS(i)
number_sources=len(args)-1

# Standard GStreamer initialization
GObject.threads_init()
Gst.init(None)

# Create gstreamer elements */
# Create Pipeline element that will form a connection of other elements
print("Creating Pipeline \n ")
pipeline = Gst.Pipeline()
is_live = False

if not pipeline:
    sys.stderr.write(" Unable to create Pipeline \n")
print("Creating streamux \n ")

# Create nvstreammux instance to form batches from one or more sources.
streammux = Gst.ElementFactory.make("nvstreammux", "Stream-muxer")
if not streammux:
    sys.stderr.write(" Unable to create NvStreamMux \n")

pipeline.add(streammux)
for i in range(number_sources):
    print("Creating source_bin ",i," \n ")
    uri_name=args[i+1]
    if uri_name.find("rtsp://") == 0 :
        is_live = True
    source_bin=create_source_bin(i, uri_name)
    if not source_bin:
        sys.stderr.write("Unable to create source bin \n")
    pipeline.add(source_bin)
    padname="sink_%u" %i
    sinkpad= streammux.get_request_pad(padname)
    if not sinkpad:
        sys.stderr.write("Unable to create sink pad bin \n")
    srcpad=source_bin.get_static_pad("src")
    if not srcpad:
        sys.stderr.write("Unable to create src pad bin \n")
    srcpad.link(sinkpad)
queue1=Gst.ElementFactory.make("queue","queue1")
queue2=Gst.ElementFactory.make("queue","queue2")
queue3=Gst.ElementFactory.make("queue","queue3")
queue4=Gst.ElementFactory.make("queue","queue4")
queue5=Gst.ElementFactory.make("queue","queue5")
queue6=Gst.ElementFactory.make("queue","queue6")
queue7=Gst.ElementFactory.make("queue","queue7")
pipeline.add(queue1)
pipeline.add(queue2)
pipeline.add(queue3)
pipeline.add(queue4)
pipeline.add(queue5)
pipeline.add(queue6)
pipeline.add(queue7)

print("Creating Pgie \n ")
pgie = Gst.ElementFactory.make("nvinfer", "primary-inference")
if not pgie:
    sys.stderr.write(" Unable to create pgie \n")

print("Creating nvtracker \n ")
tracker = Gst.ElementFactory.make("nvtracker", "tracker")
if not tracker:
    sys.stderr.write(" Unable to create tracker \n")

print("Creating secondary detector \n ")
sgie3 = Gst.ElementFactory.make("nvinfer", "secondary3-nvinference-engine")
if not sgie3:
    sys.stderr.write(" Unable to make sgie3 \n")

print("Creating nvdsanalytics \n ")
nvanalytics = Gst.ElementFactory.make("nvdsanalytics", "analytics")
if not nvanalytics:
    sys.stderr.write(" Unable to create nvanalytics \n")
nvanalytics.set_property("config-file", "config_nvdsanalytics.txt")

print("Creating tiler \n ")
tiler=Gst.ElementFactory.make("nvmultistreamtiler", "nvtiler")
if not tiler:
    sys.stderr.write(" Unable to create tiler \n")

print("Creating nvvidconv \n ")
nvvidconv = Gst.ElementFactory.make("nvvideoconvert", "convertor")
if not nvvidconv:
    sys.stderr.write(" Unable to create nvvidconv \n")

print("Creating nvosd \n ")
nvosd = Gst.ElementFactory.make("nvdsosd", "onscreendisplay")
if not nvosd:
    sys.stderr.write(" Unable to create nvosd \n")
nvosd.set_property('process-mode',OSD_PROCESS_MODE)
nvosd.set_property('display-text',OSD_DISPLAY_TEXT)
#nvosd.set_property('display_text',0)

if(is_aarch64()):
    print("Creating transform \n ")
    transform=Gst.ElementFactory.make("nvegltransform", "nvegl-transform")
    if not transform:
        sys.stderr.write(" Unable to create transform \n")


    # Create OSD to draw on the converted RGBA buffer
'''
#nvosd = Gst.ElementFactory.make("nvdsosd", "onscreendisplay")
if not nvosd:
    sys.stderr.write(" Unable to create nvosd \n")
'''
nvvidconv_postosd = Gst.ElementFactory.make("nvvideoconvert", "convertor_postosd")
if not nvvidconv_postosd:
    sys.stderr.write(" Unable to create nvvidconv_postosd \n")

# Create a caps filter
caps = Gst.ElementFactory.make("capsfilter", "filter")
caps.set_property("caps", Gst.Caps.from_string("video/x-raw(memory:NVMM), format=I420"))

# Make the encoder
if codec == "H264":
    encoder = Gst.ElementFactory.make("nvv4l2h264enc", "encoder")
    print("Creating H264 Encoder")
elif codec == "H265":
    encoder = Gst.ElementFactory.make("nvv4l2h265enc", "encoder")
    print("Creating H265 Encoder")
if not encoder:
    sys.stderr.write(" Unable to create encoder")
encoder.set_property('bitrate', bitrate)
if is_aarch64():
    encoder.set_property('preset-level', 1)
    encoder.set_property('insert-sps-pps', 1)
    encoder.set_property('bufapi-version', 1)

# Make the payload-encode video into RTP packets
if codec == "H264":
    rtppay = Gst.ElementFactory.make("rtph264pay", "rtppay")
    print("Creating H264 rtppay")
elif codec == "H265":
    rtppay = Gst.ElementFactory.make("rtph265pay", "rtppay")
    print("Creating H265 rtppay")
if not rtppay:
    sys.stderr.write(" Unable to create rtppay")


#print("Creating EGLSink \n")
#sink = Gst.ElementFactory.make("nveglglessink", "nvvideo-renderer")
#sink = Gst.ElementFactory.make("fakesink", "fakesink")
#if not sink:
#    sys.stderr.write(" Unable to create egl sink \n")



# Make the UDP sink
updsink_port_num = 250
sink = Gst.ElementFactory.make("udpsink", "udpsink")
if not sink:
    sys.stderr.write(" Unable to create udpsink")

sink.set_property('host', '224.224.255.255')
sink.set_property('port', updsink_port_num)
sink.set_property('async', False)
sink.set_property('sync', 1)



if is_live:
    print("Atleast one of the sources is live")
    streammux.set_property('live-source', 1)

streammux.set_property('width', 1920)
streammux.set_property('height', 1080)
streammux.set_property('batch-size', number_sources)
streammux.set_property('batched-push-timeout', 4000000)
#pgie.set_property('config-file-path', "dsnvanalytics_pgie_config.txt") 
pgie.set_property('config-file-path', "dsnvanalytics_pgie_config_vehicle.txt")
####Set properties of  sgie
sgie3.set_property('config-file-path', "dstest2_sgie3_config.txt")

pgie_batch_size=pgie.get_property("batch-size")
if(pgie_batch_size != number_sources):
    print("WARNING: Overriding infer-config batch-size",pgie_batch_size," with number of sources ", number_sources," \n")
    pgie.set_property("batch-size",number_sources)
tiler_rows=int(math.sqrt(number_sources))
tiler_columns=int(math.ceil((1.0*number_sources)/tiler_rows))
tiler.set_property("rows",tiler_rows)
tiler.set_property("columns",tiler_columns)
tiler.set_property("width", TILED_OUTPUT_WIDTH)
tiler.set_property("height", TILED_OUTPUT_HEIGHT)
sink.set_property("qos",0)

#Set properties of tracker
config = configparser.ConfigParser()
config.read('dsnvanalytics_tracker_config.txt')
config.sections()

for key in config['tracker']:
    if key == 'tracker-width' :
        tracker_width = config.getint('tracker', key)
        tracker.set_property('tracker-width', tracker_width)
    if key == 'tracker-height' :
        tracker_height = config.getint('tracker', key)
        tracker.set_property('tracker-height', tracker_height)
    if key == 'gpu-id' :
        tracker_gpu_id = config.getint('tracker', key)
        tracker.set_property('gpu_id', tracker_gpu_id)
    if key == 'll-lib-file' :
        tracker_ll_lib_file = config.get('tracker', key)
        tracker.set_property('ll-lib-file', tracker_ll_lib_file)
    if key == 'll-config-file' :
        tracker_ll_config_file = config.get('tracker', key)
        tracker.set_property('ll-config-file', tracker_ll_config_file)
    if key == 'enable-batch-process' :
        tracker_enable_batch_process = config.getint('tracker', key)
        tracker.set_property('enable_batch_process', tracker_enable_batch_process)
    if key == 'enable-past-frame' :
        tracker_enable_past_frame = config.getint('tracker', key)
        tracker.set_property('enable_past_frame', tracker_enable_past_frame)
    if key == 'display-tracking-id' :
        tracker_display_tracking_id = config.getint('tracker', key)
        tracker.set_property('display_tracking_id', tracker_display_tracking_id)


print("Adding elements to Pipeline \n")
pipeline.add(pgie)
pipeline.add(tracker)
pipeline.add(sgie3)
pipeline.add(nvanalytics)
pipeline.add(tiler)
pipeline.add(nvvidconv)
pipeline.add(nvosd)
pipeline.add(nvvidconv_postosd)
pipeline.add(caps)
pipeline.add(encoder)
pipeline.add(rtppay)

#if is_aarch64():
#    pipeline.add(transform)
pipeline.add(sink)

print("Linking elements in the Pipeline \n")


streammux.link(pgie)
pgie.link(tracker)
tracker.link(sgie3)
sgie3.link(nvanalytics)

nvanalytics.link(tiler)
tiler.link(nvvidconv)
nvvidconv.link(nvosd)
nvosd.link(nvvidconv_postosd)
nvvidconv_postosd.link(caps)
caps.link(encoder)
encoder.link(rtppay)
rtppay.link(sink)



# We link elements in the following order:
# sourcebin -> streammux -> nvinfer -> nvtracker -> nvdsanalytics ->
# nvtiler -> nvvideoconvert -> nvdsosd -> sink
#print("Linking elements in the Pipeline \n")
#streammux.link(queue1)
'''
queue1.link(pgie)
pgie.link(queue2)
queue2.link(tracker)
tracker.link(queue3)
queue3.link(nvanalytics)
nvanalytics.link(queue4)
queue4.link(tiler)
tiler.link(queue5)
queue5.link(nvvidconv)
nvvidconv.link(queue6)
queue6.link(nvosd)
if is_aarch64():
    nvosd.link(queue7)
    queue7.link(transform)
    transform.link(sink)
else:
    nvosd.link(queue7)
    queue7.link(sink)
'''
# create an event loop and feed gstreamer bus mesages to it
loop = GObject.MainLoop()
bus = pipeline.get_bus()
bus.add_signal_watch()
bus.connect ("message", bus_call, loop)


# Start streaming
rtsp_port_num = 8661

server = GstRtspServer.RTSPServer.new()
server.props.service = "%d" % rtsp_port_num
server.attach(None)

factory = GstRtspServer.RTSPMediaFactory.new()
factory.set_launch( "( udpsrc name=pay0 port=%d buffer-size=524288 caps=\"application/x-rtp, media=video, clock-rate=90000, encoding-name=(string)%s, payload=96 \" )" % (updsink_port_num, codec))
factory.set_shared(True)
server.get_mount_points().add_factory("/ds-test", factory)

print("\n *** DeepStream: Launched RTSP Streaming at rtsp://localhost:%d/ds-test ***\n\n" % rtsp_port_num)




nvanalytics_src_pad=nvanalytics.get_static_pad("src")
if not nvanalytics_src_pad:
    sys.stderr.write(" Unable to get src pad \n")
else:
    nvanalytics_src_pad.add_probe(Gst.PadProbeType.BUFFER, nvanalytics_src_pad_buffer_probe, 0)

# List the sources
print("Now playing...")
for i, source in enumerate(args):
    if (i != 0):
        print(i, ": ", source)

print("Starting pipeline \n")
# start play back and listed to events
pipeline.set_state(Gst.State.PLAYING)
try:
    loop.run()
except:
    pass
# cleanup
print("Exiting app\n")
pipeline.set_state(Gst.State.NULL)

if __name__ == '__main__':
    sys.exit(main(sys.argv))

Primary Config file:
[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
tlt-model-key=tlt_encode
tlt-encoded-model=resnet18_trafficcamnet_pruned.etlt
input-dims=3;544;960;0
labelfile-path=label_vehicle.txt
force-implicit-batch-dim=1
batch-size=1
process-mode=1
model-color-format=0
network-mode=2
num-detected-classes=4
interval=0
gie-unique-id=1
uff-input-blob-name=input_1
output-blob-names=output_bbox/BiasAdd;output_cov/Sigmoid

# 0=Group Rectangles, 1=DBSCAN, 2=NMS, 3=DBSCAN+NMS Hybrid, 4=None (no clustering)

cluster-mode=1

[class-attrs-all]
pre-cluster-threshold=0.35
eps=0.7
minBoxes=1

#Use the config params below for dbscan clustering mode
#[class-attrs-all]
#detected-min-w=4
#detected-min-h=4
#minBoxes=3

# Per class configurations

[class-attrs-0]
pre-cluster-threshold=0.35
eps=0.7
dbscan-min-score=0.95

[class-attrs-1]
pre-cluster-threshold=0.95
eps=0.7
dbscan-min-score=0.5

[class-attrs-2]
pre-cluster-threshold=0.95
eps=0.6
dbscan-min-score=0.5

[class-attrs-3]
pre-cluster-threshold=0.35
eps=0.6
dbscan-min-score=0.95

Secondary config file:

[property]
gpu-id=0
net-scale-factor=1
model-file=../../../../samples/models/Secondary_VehicleTypes/resnet18.caffemodel
proto-file=../../../../samples/models/Secondary_VehicleTypes/resnet18.prototxt
model-engine-file=../../../../samples/models/Secondary_VehicleTypes/resnet18.caffemodel_b16_gpu0_int8.engine
mean-file=../../../../samples/models/Secondary_VehicleTypes/mean.ppm
labelfile-path=../../../../samples/models/Secondary_VehicleTypes/labels.txt
int8-calib-file=../../../../samples/models/Secondary_VehicleTypes/cal_trt.bin
force-implicit-batch-dim=1
batch-size=16

# 0=FP32 and 1=INT8 mode

network-mode=1
input-object-min-width=64
input-object-min-height=64
model-color-format=1
process-mode=2
gpu-id=0
gie-unique-id=4
operate-on-gie-id=1
operate-on-class-ids=0
is-classifier=1
output-blob-names=predictions/Softmax
classifier-async-mode=1
classifier-threshold=0.51
#scaling-filter=0
#scaling-compute-hw=0
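Side note: duplicate sections or keys are an easy copy-paste slip in these INI-style configs. nvinfer parses its config files itself, so the sketch below is only an offline lint (and `lint_ini` is a hypothetical helper name, not a DeepStream API), but Python's standard `configparser` in strict mode will flag such slips before you ever start the pipeline:

```python
import configparser

def lint_ini(text: str):
    """Return a parse-error message if the INI text is malformed
    (e.g. duplicate sections or keys), else None."""
    parser = configparser.ConfigParser(strict=True)  # strict mode rejects duplicates
    try:
        parser.read_string(text)
    except configparser.Error as exc:
        return str(exc)
    return None

# A duplicated section, the kind of slip that is easy to miss in a long config:
assert lint_ini("[class-attrs-all]\neps=0.7\n[class-attrs-all]\neps=0.6\n") is not None
# Distinct sections parse cleanly:
assert lint_ini("[class-attrs-all]\neps=0.7\n[class-attrs-0]\neps=0.6\n") is None
```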

From the log below, it looks like the pipeline processed 4 frames and then hit the "Segmentation fault", right?
Could you add some prints to narrow down which line of code caused the "Segmentation fault", or use gdb to locate it?
Since the pipeline ran for 4 frames, I think the DS config should be right.
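One low-effort way to narrow this down from the Python side, complementary to gdb (a sketch, not part of the original script): the standard library's faulthandler module dumps a Python-level traceback when the process receives SIGSEGV, which often points at the probe or property call that triggered the native crash. Add this near the top of the script:

```python
import faulthandler
import sys

# Dump a Python traceback to stderr if the process crashes in
# native code (e.g. inside a GStreamer/DeepStream call).
faulthandler.enable(file=sys.stderr, all_threads=True)

print(faulthandler.is_enabled())  # True once enabled
```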

From this call trace, it's related to the strdup() in the merge_classification_output(...) call.
I think this is code you wrote, right? Could you check whether you use a wrong string reference there?

Hello Team,

I did not write any code myself. I just used the files and models shared by NVIDIA and tested them.
Apart from that, I did not add any logic of my own.

Which DS sample and which command did you run into this issue with? Could you share the details?

Here are the samples which I have used:

Model samples:
1) Primary detector - TrafficCamNet (from NGC)
2) Secondary classifier - VehicleTypeNet (also from NGC)

We took both of these samples and integrated them as per deepstream-python-apps; all the config files we used were shared earlier.

Command we ran:

gdb --args python3 pr_sec.py, then r to run and bt to get the backtrace logs.

The description is very vague.
Please share detailed instructions to reproduce it.

Could you please refer to the thread I wrote on Aug 4th, where I described the whole project along with the files I used?
Please let me know if you want me to attach all those files again.

Please attach them as forum attachments.
It's better to have a single package, so that I can simply decompress it and run the command to reproduce the issue.

Hello mchi,

Here are the files I used; their usage is described in the readmenvidia file.
Kindly refer to those files and let me know if anything else is needed.

pr_sec.py (26.2 KB)
label_vehicle.txt (36 Bytes)
labels.txt (44 Bytes)
dsnvanalytics_pgie_config_vehicle.txt (3.6 KB)
dstest2_sgie3_config.txt (3.9 KB)
readmenvidia (844 Bytes)

Seems the issue you ran into is similar to Segmentation Fault in Docker Deepstream Tracker - #12 by nkyriazis; please check whether that solution helps you.

Thanks!

Hi mchi,

The issue got resolved when we changed the classes in the labels file to be comma-separated.
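For reference, the shape of the fix: classifier label files in DeepStream list all class names on a single line with separators, rather than one class per line as detector label files do. An illustrative labels.txt for VehicleTypeNet (class names taken from the NGC model card; the conventional separator in DeepStream docs is a semicolon, while the report above used commas) would look like:

```
coupe;largevehicle;sedan;suv;truck;van
```

With one-class-per-line formatting, the classifier can end up with fewer label strings than output classes, which is consistent with the strdup() crash in the backtrace.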
Thanks
