RTSP stream as input + ROI (region of interest) selection + RTSP stream as output

• Hardware Platform: Jetson
• DeepStream Version: 7.1
• JetPack Version: 6.2
• TensorRT Version: 10.3.0 (v100300)
• NVIDIA GPU Driver Version (valid for GPU only): 540.4.0

We need to select an ROI (region of interest) manually as input (example coordinates: 312, 357, 643, 784). DeepStream inference should run only on that ROI, and the output should be an RTSP stream showing the full frame, not only the ROI.

I am using the Python app deepstream-rtsp-in-rtsp-out.py for testing.

So how can I achieve this? Is there a plugin or something for it?

thanks

  1. Do you want to change the ROI at runtime? If so, you can leverage nvmultiurisrcbin to transmit ROI update information to the nvdspreprocess plugin. Please refer to the native sample deepstream-server, and read the “2. ROI” part of its README. While the app is running, you only need to send a message to update the ROIs (roughly sketched below).
  2. Noticing “inference should be run on that ROI”: why should the inference output be in the full frame? Taking a detection model as an example, if you set ROIs, only the objects inside the ROIs will be detected and labelled with bboxes.
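
From memory, that update message is an HTTP POST to the REST server started by nvmultiurisrcbin; treat the port, path, and JSON field names below as approximate and verify them against the deepstream-server README:

curl -X POST 'http://localhost:9000/api/v1/roi/update' -d '{
  "stream": {
    "stream_id": "0",
    "roi_count": 1,
    "roi": [ { "roi_id": "0", "left": 312, "top": 357, "width": 643, "height": 784 } ]
  }
}'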

Thanks for the reply.

I am new to this field, so let me clarify my question.

The model should detect objects or estimate poses only within that ROI, but the RTSP output stream should show the full frame.

In the output, the ROI itself should be drawn as a bounding box on the full frame, and all detections from inside that ROI should also be visualized at their correct positions on the full frame.

The input source can be an RTSP stream or an .mp4 file; the output should be an RTSP stream.

In short: the model should see only the ROI, and the output should show the full frame + the ROI box + the detections inside the ROI.
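
The “ROI box on the full frame” part can be done with a pad probe that attaches the rectangle as display meta, which nvdsosd then draws on top of the full frame. A minimal sketch, assuming the example coordinates above are hard-coded and the probe is attached to the nvdsosd sink pad:

import pyds
from gi.repository import Gst

ROI = (312, 357, 643, 784)  # x, y, w, h on the full frame

def draw_roi_probe(pad, info, u_data):
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    if not batch_meta:
        return Gst.PadProbeReturn.OK
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        # attach the ROI rectangle as display meta; nvdsosd renders it
        dmeta = pyds.nvds_acquire_display_meta_from_pool(batch_meta)
        rect = dmeta.rect_params[0]
        rect.left, rect.top, rect.width, rect.height = ROI
        rect.border_width = 3
        rect.border_color.red = 0.0    # green border for the ROI box
        rect.border_color.green = 1.0
        rect.border_color.blue = 0.0
        rect.border_color.alpha = 1.0
        dmeta.num_rects = 1
        pyds.nvds_add_display_meta_to_frame(frame_meta, dmeta)
        l_frame = l_frame.next
    return Gst.PadProbeReturn.OK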

I already have a pipeline that detects poses and a ball (my custom models) in a single dual-PGIE pipeline; I need to add the functionality explained above to it.

thanks

No, the ROI coordinates are written manually to a variable before starting the pipeline, not at runtime.
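
With a static ROI like that, the coordinates can simply be hard-coded in the nvdspreprocess config file. A minimal sketch of the relevant group section, assuming source 0 and the example x, y, w, h values from above:

[group-0]
src-ids=0
process-on-roi=1
# ROI given as x;y;width;height on the full frame
roi-params-src-0=312;357;643;784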

Hello,

It is not really clear. Can you show us your actual pipeline?

It is working as you described on my side:

What do you have in the output?

I need my pipeline to select an ROI (manually, for example 312, 357, 643, 784, which is x, y, w, h) so that it detects the pose and ball only within that ROI.

Did you already try the preprocess plugin? That is exactly its purpose.

You can have a look at the deepstream reference application.
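
As a rough sketch of where nvdspreprocess sits in the pipeline (ending in fakesink to keep encoding out of the picture; config_preprocess.txt is assumed to carry the ROI, and nvinfer needs input-tensor-meta=1 so it consumes the tensors prepared by the preprocess plugin):

gst-launch-1.0 filesrc location=/opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.h264 ! h264parse ! nvv4l2decoder ! mux.sink_0 nvstreammux name=mux batch-size=1 width=1920 height=1080 ! nvdspreprocess config-file=config_preprocess.txt ! nvinfer config-file-path=./dstest1_pgie_config.txt input-tensor-meta=1 ! nvvideoconvert ! 'video/x-raw(memory:NVMM),format=RGBA' ! nvdsosd ! fakesink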

I tried to run deepstream_python_apps/apps/deepstream-preprocess-test to understand how the preprocess plugin works, but I am getting this error:

Creating Pipeline

Creating streamux

Creating source_bin  0

Creating source bin
source-bin-00

Creating Pgie

Creating tiler

Creating nvvidconv

Creating nvosd

Creating H264 Encoder

/opt/nvidia/deepstream/deepstream-7.1/sources/deepstream_python_apps/apps/deepstream-preprocess-test/deepstream_preprocess_test.py:282: Warning: value “4000000” of type ‘guint’ is invalid or out of range for property ‘bitrate’ of type ‘guint’
  encoder.set_property(“bitrate”, bitrate)
Creating H264 rtppay
WARNING: Overriding infer-config batch-size 4  with number of sources  1

Adding elements to Pipeline

  *** DeepStream: Launched RTSP Streaming at rtsp://localhost:8500/ds-test ***

Starting pipeline

Setting min object dimensions as 16x16 instead of 1x1 to support VIC compute mode.
WARNING: [TRT]: Using an engine plan file across different models of devices is not recommended and is likely to affect performance or even cause errors.
0:00:00.240192997 1516468 0xaaaaf72f0b90 INFO                 nvinfer gstnvinfer.cpp:684:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:2092> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-7.1/sources/deepstream_python_apps/apps/deepstream-test1/resnet18_trafficcamnet_pruned.onnx_b1_gpu0_int8.engine
Implicit layer support has been deprecated
INFO: [Implicit Engine Info]: layers num: 0

0:00:00.240281768 1516468 0xaaaaf72f0b90 INFO                 nvinfer gstnvinfer.cpp:684:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2195> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-7.1/sources/deepstream_python_apps/apps/deepstream-test1/resnet18_trafficcamnet_pruned.onnx_b1_gpu0_int8.engine
0:00:00.246525117 1516468 0xaaaaf72f0b90 INFO                 nvinfer gstnvinfer_impl.cpp:343:notifyLoadModelStatus:<primary-inference> [UID 1]: Load new model:dstest1_pgie_config.txt sucessfully
Decodebin child added: source

Decodebin child added: decodebin0

Decodebin child added: rtph265depay0

Decodebin child added: h265parse0

Decodebin child added: capsfilter0

Decodebin child added: nvv4l2decoder0

Opening in BLOCKING MODE
NvMMLiteOpen : Block : BlockType = 279
NvMMLiteBlockCreate : Block : BlockType = 279
In cb_newpad

gstname= video/x-raw
features= <Gst.CapsFeatures object at 0xffff771cd600 (GstCapsFeatures at 0xfffee40852a0)>

Frame Number= 0 Number of Objects= 0 Vehicle_count= 0 Person_count= 0
Frame Number= 1 Number of Objects= 0 Vehicle_count= 0 Person_count= 0
Frame Number= 2 Number of Objects= 0 Vehicle_count= 0 Person_count= 0
Frame Number= 3 Number of Objects= 0 Vehicle_count= 0 Person_count= 0
Frame Number= 4 Number of Objects= 0 Vehicle_count= 0 Person_count= 0
Frame Number= 5 Number of Objects= 0 Vehicle_count= 0 Person_count= 0
Frame Number= 6 Number of Objects= 0 Vehicle_count= 0 Person_count= 0
Frame Number= 7 Number of Objects= 0 Vehicle_count= 0 Person_count= 0
Frame Number= 8 Number of Objects= 0 Vehicle_count= 0 Person_count= 0
Frame Number= 9 Number of Objects= 0 Vehicle_count= 0 Person_count= 0
0:00:00.924801855 1516468 0xaaaaf7c0c400 WARN                 nvinfer gstnvinfer.cpp:2423:gst_nvinfer_output_loop:<primary-inference> error: Internal data stream error.
0:00:00.924835520 1516468 0xaaaaf7c0c400 WARN                 nvinfer gstnvinfer.cpp:2423:gst_nvinfer_output_loop:<primary-inference> error: streaming stopped, reason not-linked (-1)
Error: gst-stream-error-quark: Internal data stream error. (1): /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(2423): gst_nvinfer_output_loop (): /GstPipeline:pipeline0/GstNvInfer:primary-inference:
streaming stopped, reason not-linked (-1)
Frame Number= 10 Number of Objects= 0 Vehicle_count= 0 Person_count= 0
Frame Number= 11 Number of Objects= 0 Vehicle_count= 0 Person_count= 0
Frame Number= 12 Number of Objects= 0 Vehicle_count= 0 Person_count= 0

Did you modify the code or configuration? To rule out a source issue, can the following cmd run well?

python3 deepstream_preprocess_test.py -i file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.mp4

Thanks for the reply.

Yes, I modified the code to use the software encoder.

Yes, I tried running the cmd you gave me, but I still get the same error.

/opt/nvidia/deepstream/deepstream/sources/deepstream_python_apps/apps/deepstream-preprocess-test$ python3 deepstream_preprocess_test.py -i file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.mp4
Creating Pipeline 
 
Creating streamux 
 
Creating source_bin  0  
 
Creating source bin
source-bin-00
Creating Pgie 
 
Creating tiler 
 
Creating nvvidconv 
 
Creating nvosd 
 
Creating H264 Encoder
/opt/nvidia/deepstream/deepstream-7.1/sources/deepstream_python_apps/apps/deepstream-preprocess-test/deepstream_preprocess_test.py:282: Warning: value "4000000" of type 'guint' is invalid or out of range for property 'bitrate' of type 'guint'
  encoder.set_property("bitrate", bitrate)
Creating H264 rtppay
WARNING: Overriding infer-config batch-size 4  with number of sources  1  

Adding elements to Pipeline 


 *** DeepStream: Launched RTSP Streaming at rtsp://localhost:8500/ds-test ***


Starting pipeline 

Setting min object dimensions as 16x16 instead of 1x1 to support VIC compute mode.
WARNING: [TRT]: Using an engine plan file across different models of devices is not recommended and is likely to affect performance or even cause errors.
0:00:00.237517933 1540110 0xaaaaaf60d390 INFO                 nvinfer gstnvinfer.cpp:684:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:2092> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-7.1/sources/deepstream_python_apps/apps/deepstream-test1/resnet18_trafficcamnet_pruned.onnx_b1_gpu0_int8.engine
Implicit layer support has been deprecated
INFO: [Implicit Engine Info]: layers num: 0

0:00:00.237616528 1540110 0xaaaaaf60d390 INFO                 nvinfer gstnvinfer.cpp:684:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2195> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-7.1/sources/deepstream_python_apps/apps/deepstream-test1/resnet18_trafficcamnet_pruned.onnx_b1_gpu0_int8.engine
0:00:00.243621209 1540110 0xaaaaaf60d390 INFO                 nvinfer gstnvinfer_impl.cpp:343:notifyLoadModelStatus:<primary-inference> [UID 1]: Load new model:dstest1_pgie_config.txt sucessfully
Decodebin child added: source 

Decodebin child added: decodebin0 

Decodebin child added: qtdemux0 

Decodebin child added: multiqueue0 

Decodebin child added: h264parse0 

Decodebin child added: capsfilter0 

Decodebin child added: aacparse0 

Decodebin child added: avdec_aac0 

Decodebin child added: nvv4l2decoder0 

Opening in BLOCKING MODE 
NvMMLiteOpen : Block : BlockType = 261 
NvMMLiteBlockCreate : Block : BlockType = 261 
In cb_newpad

gstname= video/x-raw
features= <Gst.CapsFeatures object at 0xffff9b74d5a0 (GstCapsFeatures at 0xffff3c026480)>
In cb_newpad

gstname= audio/x-raw
Frame Number= 0 Number of Objects= 8 Vehicle_count= 1 Person_count= 7
Frame Number= 1 Number of Objects= 6 Vehicle_count= 0 Person_count= 6
Frame Number= 2 Number of Objects= 9 Vehicle_count= 2 Person_count= 7
Frame Number= 3 Number of Objects= 11 Vehicle_count= 3 Person_count= 8
Frame Number= 4 Number of Objects= 9 Vehicle_count= 1 Person_count= 8
Frame Number= 5 Number of Objects= 7 Vehicle_count= 2 Person_count= 5
Frame Number= 6 Number of Objects= 6 Vehicle_count= 2 Person_count= 4
Frame Number= 7 Number of Objects= 7 Vehicle_count= 2 Person_count= 5
Frame Number= 8 Number of Objects= 6 Vehicle_count= 2 Person_count= 4
Frame Number= 9 Number of Objects= 7 Vehicle_count= 2 Person_count= 5
0:00:00.941819968 1540110 0xaaaaaff29c00 WARN                 nvinfer gstnvinfer.cpp:2423:gst_nvinfer_output_loop:<primary-inference> error: Internal data stream error.
0:00:00.941857473 1540110 0xaaaaaff29c00 WARN                 nvinfer gstnvinfer.cpp:2423:gst_nvinfer_output_loop:<primary-inference> error: streaming stopped, reason not-linked (-1)
Error: gst-stream-error-quark: Internal data stream error. (1): /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(2423): gst_nvinfer_output_loop (): /GstPipeline:pipeline0/GstNvInfer:primary-inference:
streaming stopped, reason not-linked (-1)
Frame Number= 10 Number of Objects= 9 Vehicle_count= 5 Person_count= 4
Frame Number= 11 Number of Objects= 8 Vehicle_count= 4 Person_count= 4
Frame Number= 12 Number of Objects= 7 Vehicle_count= 3 Person_count= 4

The issue should be related to your code modifications. Without the modifications, can the cmd in my last comment run well? If you are using the software encoder, please refer to the following cmd to modify the code. You can use gst-launch to debug first.

 gst-launch-1.0 filesrc  location=/opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.h264 ! h264parse ! nvv4l2decoder ! mux.sink_0 nvstreammux name=mux batch-size=1 width=1920 height=1080 ! nvinfer config-file-path=./dstest1_pgie_config.txt ! nvvideoconvert ! 'video/x-raw(memory:NVMM),format=RGBA' ! nvdsosd ! nvvideoconvert ! x264enc ! h264parse ! qtmux ! filesink location=./out.mp4

Thanks for the reply.

  1. Without the modifications, the cmd in your last comment (python3 deepstream_preprocess_test.py -i file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.mp4) does not run, because there is no hardware encoder on the Jetson Orin Nano.

  2. I tried your last cmd to debug first, but I got this error:

/opt/nvidia/deepstream/deepstream/sources/deepstream_python_apps/apps/deepstream-preprocess-test$ gst-launch-1.0 filesrc  location=/opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.h264 ! h264parse ! nvv4l2decoder ! mux.sink_0 nvstreammux name=mux batch-size=1 width=1920 height=1080 ! nvinfer config-file-path=./dstest1_pgie_config.txt ! nvvideoconvert ! 'video/x-raw(memory:NVMM),format=RGBA' ! nvdsosd ! nvvideoconvert ! x264enc ! h264parse ! qtmux ! filesink location=./out.mp4
Setting pipeline to PAUSED ...
Opening in BLOCKING MODE 
Setting min object dimensions as 16x16 instead of 1x1 to support VIC compute mode.
WARNING: [TRT]: Using an engine plan file across different models of devices is not recommended and is likely to affect performance or even cause errors.
0:00:00.323346837 1572344 0xaaaaf8737e00 INFO                 nvinfer gstnvinfer.cpp:684:gst_nvinfer_logger:<nvinfer0> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:2092> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-7.1/sources/deepstream_python_apps/apps/deepstream-test1/resnet18_trafficcamnet_pruned.onnx_b1_gpu0_int8.engine
Implicit layer support has been deprecated
INFO: [Implicit Engine Info]: layers num: 0

0:00:00.323480573 1572344 0xaaaaf8737e00 WARN                 nvinfer gstnvinfer.cpp:681:gst_nvinfer_logger:<nvinfer0> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::checkBackendParams() <nvdsinfer_context_impl.cpp:2026> [UID = 1]: Backend has maxBatchSize 1 whereas 4 has been requested
0:00:00.323496190 1572344 0xaaaaf8737e00 WARN                 nvinfer gstnvinfer.cpp:681:gst_nvinfer_logger:<nvinfer0> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2201> [UID = 1]: deserialized backend context :/opt/nvidia/deepstream/deepstream-7.1/sources/deepstream_python_apps/apps/deepstream-test1/resnet18_trafficcamnet_pruned.onnx_b1_gpu0_int8.engine failed to match config params, trying rebuild
0:00:00.327570831 1572344 0xaaaaf8737e00 INFO                 nvinfer gstnvinfer.cpp:684:gst_nvinfer_logger:<nvinfer0> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2106> [UID = 1]: Trying to create engine from model files
WARNING: [TRT]: DLA requests all profiles have same min, max, and opt value. All dla layers are falling back to GPU
WARNING: Serialize engine failed because of file path: /opt/nvidia/deepstream/deepstream-7.1/samples/models/Primary_Detector/resnet18_trafficcamnet_pruned.onnx_b4_gpu0_int8.engine opened error
0:07:01.320607058 1572344 0xaaaaf8737e00 WARN                 nvinfer gstnvinfer.cpp:681:gst_nvinfer_logger:<nvinfer0> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2133> [UID = 1]: failed to serialize cude engine to file: /opt/nvidia/deepstream/deepstream-7.1/samples/models/Primary_Detector/resnet18_trafficcamnet_pruned.onnx_b4_gpu0_int8.engine
INFO: [FullDims Engine Info]: layers num: 3
0   INPUT  kFLOAT input_1:0       3x544x960       min: 1x3x544x960     opt: 4x3x544x960     Max: 4x3x544x960     
1   OUTPUT kFLOAT output_cov/Sigmoid:0 4x34x60         min: 0               opt: 0               Max: 0               
2   OUTPUT kFLOAT output_bbox/BiasAdd:0 16x34x60        min: 0               opt: 0               Max: 0               

0:07:01.771429966 1572344 0xaaaaf8737e00 INFO                 nvinfer gstnvinfer_impl.cpp:343:notifyLoadModelStatus:<nvinfer0> [UID 1]: Load new model:./dstest1_pgie_config.txt sucessfully
Pipeline is PREROLLING ...
NvMMLiteOpen : Block : BlockType = 261 
NvMMLiteBlockCreate : Block : BlockType = 261 
Redistribute latency...
Pipeline is PREROLLED ...
Setting pipeline to PLAYING ...
Redistribute latency...
New clock: GstSystemClock
/dvs/git/dirty/git-master_linux/nvutils/nvbufsurftransform/nvbufsurftransform_copy.cpp:438: => Failed in mem copy

ERROR: Failed to add cudaStream callback for returning input buffers, cuda err_no:700, err_str:cudaErrorIllegalAddress
ERROR: Preprocessor transform input data failed., nvinfer error:NVDSINFER_CUDA_ERROR
0:08:40.083025637 1572344 0xaaaaf75cc760 WARN                 nvinfer gstnvinfer.cpp:1420:gst_nvinfer_input_queue_loop:<nvinfer0> error: Failed to queue input batch for inferencing
ERROR: from element /GstPipeline:pipeline0/GstNvInfer:nvinfer0: Failed to queue input batch for inferencing
Additional debug info:
/dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(1420): gst_nvinfer_input_queue_loop (): /GstPipeline:pipeline0/GstNvInfer:nvinfer0
Execution ended after 0:01:33.977467808
Setting pipeline to NULL ...
ERROR: Failed to synchronize on cuda copy-coplete-event, cuda err_no:700, err_str:cudaErrorIllegalAddress
0:08:40.137231941 1572344 0xaaaaf793c300 WARN                 nvinfer gstnvinfer.cpp:2461:gst_nvinfer_output_loop:<nvinfer0> error: Failed to dequeue output from inferencing. NvDsInferContext error: NVDSINFER_CUDA_ERROR
ERROR: from element /GstPipeline:pipeline0/GstNvInfer:nvinfer0: Failed to dequeue output from inferencing. NvDsInferContext error: NVDSINFER_CUDA_ERROR
Additional debug info:
/dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(2461): gst_nvinfer_output_loop (): /GstPipeline:pipeline0/GstNvInfer:nvinfer0
0:08:40.137398189 1572344 0xaaaaf793c300 WARN                 nvinfer gstnvinfer.cpp:681:gst_nvinfer_logger:<nvinfer0> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::releaseBatchOutput() <nvdsinfer_context_impl.cpp:1990> [UID = 1]: Tried to release an outputBatchID which is already with the context
libnvosd (1386):(ERROR) : cuGraphicsEGLRegisterImage failed : 700 
ERROR: from element /GstPipeline:pipeline0/GstNvDsOsd:nvdsosd0: Unable to draw shapes onto video frame by GPU
Additional debug info:
/dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvdsosd/gstnvdsosd.c(645): gst_nvds_osd_transform_ip (): /GstPipeline:pipeline0/GstNvDsOsd:nvdsosd0
0:08:40.144766349 1572344 0xaaaaf793c300 WARN                 nvinfer gstnvinfer.cpp:2423:gst_nvinfer_output_loop:<nvinfer0> error: Internal data stream error.
0:08:40.144811855 1572344 0xaaaaf793c300 WARN                 nvinfer gstnvinfer.cpp:2423:gst_nvinfer_output_loop:<nvinfer0> error: streaming stopped, reason error (-5)
ERROR: from element /GstPipeline:pipeline0/GstNvInfer:nvinfer0: Internal data stream error.
Additional debug info:
/dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(2423): gst_nvinfer_output_loop (): /GstPipeline:pipeline0/GstNvInfer:nvinfer0:
streaming stopped, reason error (-5)
ERROR: Failed to synchronize on cuda copy-coplete-event, cuda err_no:700, err_str:cudaErrorIllegalAddress
0:08:40.145003928 1572344 0xaaaaf793c300 WARN                 nvinfer gstnvinfer.cpp:2461:gst_nvinfer_output_loop:<nvinfer0> error: Failed to dequeue output from inferencing. NvDsInferContext error: NVDSINFER_CUDA_ERROR
ERROR: from element /GstPipeline:pipeline0/GstNvInfer:nvinfer0: Failed to dequeue output from inferencing. NvDsInferContext error: NVDSINFER_CUDA_ERROR
Additional debug info:
/dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(2461): gst_nvinfer_output_loop (): /GstPipeline:pipeline0/GstNvInfer:nvinfer0
0:08:40.145073756 1572344 0xaaaaf793c300 WARN                 nvinfer gstnvinfer.cpp:681:gst_nvinfer_logger:<nvinfer0> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::releaseBatchOutput() <nvdsinfer_context_impl.cpp:1983> [UID = 1]: Tried to release an unknown outputBatchID
libnvosd (1386):(ERROR) : cuGraphicsEGLRegisterImage failed : 700 
ERROR: from element /GstPipeline:pipeline0/GstNvDsOsd:nvdsosd0: Unable to draw shapes onto video frame by GPU
Additional debug info:
/dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvdsosd/gstnvdsosd.c(645): gst_nvds_osd_transform_ip (): /GstPipeline:pipeline0/GstNvDsOsd:nvdsosd0
ERROR: Failed to synchronize on cuda copy-coplete-event, cuda err_no:700, err_str:cudaErrorIllegalAddress
0:08:40.151367016 1572344 0xaaaaf793c300 WARN                 nvinfer gstnvinfer.cpp:2461:gst_nvinfer_output_loop:<nvinfer0> error: Failed to dequeue output from inferencing. NvDsInferContext error: NVDSINFER_CUDA_ERROR
ERROR: from element /GstPipeline:pipeline0/GstNvInfer:nvinfer0: Failed to dequeue output from inferencing. NvDsInferContext error: NVDSINFER_CUDA_ERROR
Additional debug info:
/dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(2461): gst_nvinfer_output_loop (): /GstPipeline:pipeline0/GstNvInfer:nvinfer0
0:08:40.151463565 1572344 0xaaaaf793c300 WARN                 nvinfer gstnvinfer.cpp:681:gst_nvinfer_logger:<nvinfer0> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::releaseBatchOutput() <nvdsinfer_context_impl.cpp:1983> [UID = 1]: Tried to release an unknown outputBatchID
libnvosd (1386):(ERROR) : cuGraphicsEGLRegisterImage failed : 700 
ERROR: from element /GstPipeline:pipeline0/GstNvDsOsd:nvdsosd0: Unable to draw shapes onto video frame by GPU
Additional debug info:
/dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvdsosd/gstnvdsosd.c(645): gst_nvds_osd_transform_ip (): /GstPipeline:pipeline0/GstNvDsOsd:nvdsosd0
ERROR: Failed to make stream wait on event, cuda err_no:700, err_str:cudaErrorIllegalAddress
ERROR: Preprocessor transform input data failed., nvinfer error:NVDSINFER_CUDA_ERROR
0:08:40.162194190 1572344 0xaaaaf75cc760 WARN                 nvinfer gstnvinfer.cpp:1420:gst_nvinfer_input_queue_loop:<nvinfer0> error: Failed to queue input batch for inferencing
ERROR: from element /GstPipeline:pipeline0/GstNvInfer:nvinfer0: Failed to queue input batch for inferencing
Additional debug info:
/dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(1420): gst_nvinfer_input_queue_loop (): /GstPipeline:pipeline0/GstNvInfer:nvinfer0
ERROR: from element /GstPipeline:pipeline0/GstH264Parse:h264parse0: Internal data stream error.
Additional debug info:
../libs/gst/base/gstbaseparse.c(3681): gst_base_parse_loop (): /GstPipeline:pipeline0/GstH264Parse:h264parse0:
streaming stopped, reason error (-5)
nvstreammux: Successfully handled EOS for source_id=0
libnvosd (1386):(ERROR) : cuGraphicsEGLRegisterImage failed : 700 
ERROR: from element /GstPipeline:pipeline0/GstNvDsOsd:nvdsosd0: Unable to draw shapes onto video frame by GPU
Additional debug info:
/dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvdsosd/gstnvdsosd.c(645): gst_nvds_osd_transform_ip (): /GstPipeline:pipeline0/GstNvDsOsd:nvdsosd0
ERROR: Failed to make stream wait on event, cuda err_no:700, err_str:cudaErrorIllegalAddress
ERROR: Preprocessor transform input data failed., nvinfer error:NVDSINFER_CUDA_ERROR
0:08:40.169583887 1572344 0xaaaaf75cc760 WARN                 nvinfer gstnvinfer.cpp:1420:gst_nvinfer_input_queue_loop:<nvinfer0> error: Failed to queue input batch for inferencing
ERROR: from element /GstPipeline:pipeline0/GstNvInfer:nvinfer0: Failed to queue input batch for inferencing
Additional debug info:
/dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(1420): gst_nvinfer_input_queue_loop (): /GstPipeline:pipeline0/GstNvInfer:nvinfer0
libnvosd (1386):(ERROR) : cuGraphicsEGLRegisterImage failed : 700 
ERROR: from element /GstPipeline:pipeline0/GstNvDsOsd:nvdsosd0: Unable to draw shapes onto video frame by GPU
Additional debug info:
/dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvdsosd/gstnvdsosd.c(645): gst_nvds_osd_transform_ip (): /GstPipeline:pipeline0/GstNvDsOsd:nvdsosd0
libnvosd (1386):(ERROR) : cuGraphicsEGLRegisterImage failed : 700 
ERROR: from element /GstPipeline:pipeline0/GstNvDsOsd:nvdsosd0: Unable to draw shapes onto video frame by GPU
Additional debug info:
/dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvdsosd/gstnvdsosd.c(645): gst_nvds_osd_transform_ip (): /GstPipeline:pipeline0/GstNvDsOsd:nvdsosd0
CUDA Runtime error cudaFreeHost(host_) # an illegal memory access was encountered, code = cudaErrorIllegalAddress [ 700 ] in file /dvs/git/dirty/git-master_linux/deepstream/sdk/src/utils/nvll_osd/memory.hpp:78
CUDA Runtime error cudaFree(device_) # an illegal memory access was encountered, code = cudaErrorIllegalAddress [ 700 ] in file /dvs/git/dirty/git-master_linux/deepstream/sdk/src/utils/nvll_osd/memory.hpp:79
CUDA Runtime error cudaFreeHost(host_) # an illegal memory access was encountered, code = cudaErrorIllegalAddress [ 700 ] in file /dvs/git/dirty/git-master_linux/deepstream/sdk/src/utils/nvll_osd/memory.hpp:78
CUDA Runtime error cudaFree(device_) # an illegal memory access was encountered, code = cudaErrorIllegalAddress [ 700 ] in file /dvs/git/dirty/git-master_linux/deepstream/sdk/src/utils/nvll_osd/memory.hpp:79
CUDA Runtime error cudaFreeHost(host_) # an illegal memory access was encountered, code = cudaErrorIllegalAddress [ 700 ] in file /dvs/git/dirty/git-master_linux/deepstream/sdk/src/utils/nvll_osd/memory.hpp:78
CUDA Runtime error cudaFree(device_) # an illegal memory access was encountered, code = cudaErrorIllegalAddress [ 700 ] in file /dvs/git/dirty/git-master_linux/deepstream/sdk/src/utils/nvll_osd/memory.hpp:79
CUDA Runtime error cudaFreeHost(host_) # an illegal memory access was encountered, code = cudaErrorIllegalAddress [ 700 ] in file /dvs/git/dirty/git-master_linux/deepstream/sdk/src/utils/nvll_osd/memory.hpp:78
CUDA Runtime error cudaFree(device_) # an illegal memory access was encountered, code = cudaErrorIllegalAddress [ 700 ] in file /dvs/git/dirty/git-master_linux/deepstream/sdk/src/utils/nvll_osd/memory.hpp:79
CUDA Runtime error cudaFreeHost(host_) # an illegal memory access was encountered, code = cudaErrorIllegalAddress [ 700 ] in file /dvs/git/dirty/git-master_linux/deepstream/sdk/src/utils/nvll_osd/memory.hpp:78
CUDA Runtime error cudaFree(device_) # an illegal memory access was encountered, code = cudaErrorIllegalAddress [ 700 ] in file /dvs/git/dirty/git-master_linux/deepstream/sdk/src/utils/nvll_osd/memory.hpp:79
CUDA Runtime error cudaFreeHost(host_) # an illegal memory access was encountered, code = cudaErrorIllegalAddress [ 700 ] in file /dvs/git/dirty/git-master_linux/deepstream/sdk/src/utils/nvll_osd/memory.hpp:78
CUDA Runtime error cudaFree(device_) # an illegal memory access was encountered, code = cudaErrorIllegalAddress [ 700 ] in file /dvs/git/dirty/git-master_linux/deepstream/sdk/src/utils/nvll_osd/memory.hpp:79

Here is the modified code, you can check it:

deepstream_preprocess_test.py.zip (5.6 KB)

Please refer to the software encoding part. Here is the simplified cmd:

gst-launch-1.0  filesrc location=/opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.mp4 ! qtdemux ! h264parse ! nvv4l2decoder  ! nvvideoconvert  copy-hw=2   ! 'video/x-raw, format=I420' ! x264enc bitrate=1000000 ! filesink location=test.264

Please refer to this FAQ for the “Failed in mem copy” error.

I successfully ran it after fixing the “Failed in mem copy” error by adding “copy-hw=2” to this cmd:

gst-launch-1.0 filesrc  location=/opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.h264 ! h264parse ! nvv4l2decoder ! mux.sink_0 nvstreammux name=mux batch-size=1 width=1920 height=1080 ! nvinfer config-file-path=./dstest1_pgie_config.txt ! nvvideoconvert ! 'video/x-raw(memory:NVMM),format=RGBA' ! nvdsosd ! nvvideoconvert copy-hw=2 ! x264enc ! h264parse ! qtmux ! filesink location=./out.mp4

Now, how about my modified deepstream_preprocess_test.py?

Please update “caps”, Gst.Caps.from_string(“video/x-raw(memory:NVMM), format=I420”), to “caps”, Gst.Caps.from_string(“video/x-raw, format=I420”), because x264enc can’t accept hardware (NVMM) memory.
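
In the Python app that is the capsfilter feeding the software encoder; a minimal sketch of the change (the element name here is illustrative):

caps_enc = Gst.ElementFactory.make("capsfilter", "encoder-caps")
# x264enc consumes system memory, so drop the NVMM caps feature:
# old: Gst.Caps.from_string("video/x-raw(memory:NVMM), format=I420")
caps_enc.set_property("caps", Gst.Caps.from_string("video/x-raw, format=I420"))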


Thank you very much, deepstream_python_apps/apps/deepstream-preprocess-test is working now.

I have another problem: I added the preprocess plugin to my custom pipeline, but I’m getting this:

(.venv) hbai@ubuntu:~/Documents/sky-metrics-ai$ python new_src/deepstream_test1_rtsp_in_rtsp_out_pose.py -i rtsp://admin:hbai2025@192.168.217.151:554
Creating Pipeline
Creating streamux
Creating source_bin 0
Creating source bin
source-bin-00
Creating YOLO-POSE inference engines (two PGIEs)
Creating tiler
Creating nvvidconv (pre-tiler)
Creating nvvidconv_rgba (for RGBA conversion)
Creating capsfilter for RGBA
Creating nvosd
Creating nvvidconv_postosd
Creating Software H264 Encoder
/home/hbai/Documents/sky-metrics-ai/new_src/deepstream_test1_rtsp_in_rtsp_out_pose.py:456: Warning: value "4000000" of type 'guint' is invalid or out of range for property 'bitrate' of type 'guint'
  encoder.set_property("bitrate", int(bitrate))
Creating H264 rtppay
Adding elements to Pipeline

*** YOLO-POSE DeepStream: Launched RTSP Streaming at rtsp://localhost:8512/pose-stream ***

Starting pipeline
Setting min object dimensions as 16x16 instead of 1x1 to support VIC compute mode.
WARNING: [TRT]: Using an engine plan file across different models of devices is not recommended and is likely to affect performance or even cause errors.
0:00:00.266582873 1674196 0xaaaad9a67070 INFO                 nvinfer gstnvinfer.cpp:684:gst_nvinfer_logger:<primary-hoop-ball-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:2092> [UID = 1]: deserialized trt engine from :/home/hbai/Documents/sky-metrics-ai/models/hoop_ball_fp32.engine
Implicit layer support has been deprecated
INFO: [Implicit Engine Info]: layers num: 0

0:00:00.266700798 1674196 0xaaaad9a67070 INFO                 nvinfer gstnvinfer.cpp:684:gst_nvinfer_logger:<primary-hoop-ball-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2195> [UID = 1]: Use deserialized engine model: /home/hbai/Documents/sky-metrics-ai/models/hoop_ball_fp32.engine
0:00:00.277096741 1674196 0xaaaad9a67070 INFO                 nvinfer gstnvinfer_impl.cpp:343:notifyLoadModelStatus:<primary-hoop-ball-inference> [UID 1]: Load new model:/home/hbai/Documents/sky-metrics-ai/dstest1_pgie_config_sky.txt sucessfully
Setting min object dimensions as 16x16 instead of 1x1 to support VIC compute mode.
0:00:00.340553849 1674196 0xaaaad9a67070 INFO                 nvinfer gstnvinfer.cpp:684:gst_nvinfer_logger:<primary-pose-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:2092> [UID = 1]: deserialized trt engine from :/home/hbai/Documents/sky-metrics-ai/models/yolov8n-pose.onnx_b1_gpu0_fp16.engine
Implicit layer support has been deprecated
INFO: [Implicit Engine Info]: layers num: 0

0:00:00.340660734 1674196 0xaaaad9a67070 INFO                 nvinfer gstnvinfer.cpp:684:gst_nvinfer_logger:<primary-pose-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2195> [UID = 1]: Use deserialized engine model: /home/hbai/Documents/sky-metrics-ai/models/yolov8n-pose.onnx_b1_gpu0_fp16.engine
0:00:00.345369405 1674196 0xaaaad9a67070 INFO                 nvinfer gstnvinfer_impl.cpp:343:notifyLoadModelStatus:<primary-pose-inference> [UID 1]: Load new model:yolopose_config.txt sucessfully
Decodebin child added: source
Decodebin child added: decodebin0
Decodebin child added: rtph265depay0
Decodebin child added: h265parse0
Decodebin child added: capsfilter0
Decodebin child added: nvv4l2decoder0
Opening in BLOCKING MODE 
NvMMLiteOpen : Block : BlockType = 279 
NvMMLiteBlockCreate : Block : BlockType = 279 
In cb_newpad
gstname= video/x-raw
features= <Gst.CapsFeatures object at 0xffff7d86e0e0 (GstCapsFeatures at 0xffff180866a0)>
0:00:00.787885961 1674196 0xaaaad9a6d2a0 WARN                 nvinfer gstnvinfer.cpp:2010:gst_nvinfer_process_tensor_input:<primary-pose-inference> warning: nvinfer could not find input layer with name = input_1:0

0:00:00.788044272 1674196 0xaaaad9a6d240 WARN                 nvinfer gstnvinfer.cpp:2010:gst_nvinfer_process_tensor_input:<primary-hoop-ball-inference> warning: nvinfer could not find input layer with name = input_1:0

0:00:00.788465505 1674196 0xaaaad9a6d2a0 WARN                 nvinfer gstnvinfer.cpp:2010:gst_nvinfer_process_tensor_input:<primary-pose-inference> warning: nvinfer could not find input layer with name = input_1:0

Warning: gst-stream-error-quark: nvinfer could not find input layer with name = input_1:0
 (1): /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(2010): gst_nvinfer_process_tensor_input (): /GstPipeline:pipeline0/GstNvInfer:primary-pose-inference
0:00:00.788618695 1674196 0xaaaad9a6d240 WARN                 nvinfer gstnvinfer.cpp:2010:gst_nvinfer_process_tensor_input:<primary-hoop-ball-inference> warning: nvinfer could not find input layer with name = input_1:0

Warning: gst-stream-error-quark: nvinfer could not find input layer with name = input_1:0
 (1): /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(2010): gst_nvinfer_process_tensor_input (): /GstPipeline:pipeline0/GstNvInfer:primary-hoop-ball-inference
Warning: gst-stream-error-quark: nvinfer could not find input layer with name = input_1:0
 (1): /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(2010): gst_nvinfer_process_tensor_input (): /GstPipeline:pipeline0/GstNvInfer:primary-pose-inference
Warning: gst-stream-error-quark: nvinfer could not find input layer with name = input_1:0
 (1): /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(2010): gst_nvinfer_process_tensor_input (): /GstPipeline:pipeline0/GstNvInfer:primary-hoop-ball-inference
0:00:00.789002391 1674196 0xaaaad9a6d2a0 WARN                 nvinfer gstnvinfer.cpp:2010:gst_nvinfer_process_tensor_input:<primary-pose-inference> warning: nvinfer could not find input layer with name = input_1:0

0:00:00.789104795 1674196 0xaaaad9a6d240 WARN                 nvinfer gstnvinfer.cpp:2010:gst_nvinfer_process_tensor_input:<primary-hoop-ball-inference> warning: nvinfer could not find input layer with name = input_1:0

Warning: gst-stream-error-quark: nvinfer could not find input layer with name = input_1:0
 (1): /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(2010): gst_nvinfer_process_tensor_input (): /GstPipeline:pipeline0/GstNvInfer:primary-pose-inference
Warning: gst-stream-error-quark: nvinfer could not find input layer with name = input_1:0
 (1): /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(2010): gst_nvinfer_process_tensor_input (): /GstPipeline:pipeline0/GstNvInfer:primary-hoop-ball-inference
0:00:00.789590702 1674196 0xaaaad9a6d2a0 WARN                 nvinfer gstnvinfer.cpp:2010:gst_nvinfer_process_tensor_input:<primary-pose-inference> warning: nvinfer could not find input layer with name = input_1:0

0:00:00.789685042 1674196 0xaaaad9a6d240 WARN                 nvinfer gstnvinfer.cpp:2010:gst_nvinfer_process_tensor_input:<primary-hoop-ball-inference> warning: nvinfer could not find input layer with name = input_1:0

Warning: gst-stream-error-quark: nvinfer could not find input layer with name = input_1:0
 (1): /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(2010): gst_nvinfer_process_tensor_input (): /GstPipeline:pipeline0/GstNvInfer:primary-pose-inference
Warning: gst-stream-error-quark: nvinfer could not find input layer with name = input_1:0
 (1): /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(2010): gst_nvinfer_process_tensor_input (): /GstPipeline:pipeline0/GstNvInfer:primary-hoop-ball-inference
=== probe: pad caps: video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, multiview-mode=(string)mono, multiview-flags=(GstVideoMultiviewFlagsSet)0:ffffffff:/right-view-first/left-flipped/left-flopped/right-flipped/right-flopped/half-aspect/mixed-mono, framerate=(fraction)30/1, batch-size=(int)1, num-surfaces-per-frame=(int)1, format=(string)RGBA, block-linear=(boolean)false, nvbuf-memory-type=(string)nvbuf-mem-surface-array, gpu-id=(int)0
Frame Number: 0
0:00:00.815346918 1674196 0xaaaad9a6d2a0 WARN                 nvinfer gstnvinfer.cpp:2010:gst_nvinfer_process_tensor_input:<primary-pose-inference> warning: nvinfer could not find input layer with name = input_1:0

0:00:00.815485932 1674196 0xaaaad9a6d240 WARN                 nvinfer gstnvinfer.cpp:2010:gst_nvinfer_process_tensor_input:<primary-hoop-ball-inference> warning: nvinfer could not find input layer with name = input_1:0

Warning: gst-stream-error-quark: nvinfer could not find input layer with name = input_1:0
 (1): /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(2010): gst_nvinfer_process_tensor_input (): /GstPipeline:pipeline0/GstNvInfer:primary-pose-inference
Warning: gst-stream-error-quark: nvinfer could not find input layer with name = input_1:0
 (1): /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(2010): gst_nvinfer_process_tensor_input (): /GstPipeline:pipeline0/GstNvInfer:primary-hoop-ball-inference
Frame 0: Detected 0 poses (from uid=1)
=== probe: pad caps: video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, multiview-mode=(string)mono, multiview-flags=(GstVideoMultiviewFlagsSet)0:ffffffff:/right-view-first/left-flipped/left-flopped/right-flipped/right-flopped/half-aspect/mixed-mono, framerate=(fraction)30/1, batch-size=(int)1, num-surfaces-per-frame=(int)1, format=(string)RGBA, block-linear=(boolean)false, nvbuf-memory-type=(string)nvbuf-mem-surface-array, gpu-id=(int)0
Frame Number: 1
0:00:00.853765152 1674196 0xaaaad9a6d2a0 WARN                 nvinfer gstnvinfer.cpp:2010:gst_nvinfer_process_tensor_input:<primary-pose-inference> warning: nvinfer could not find input layer with name = input_1:0

0:00:00.854052364 1674196 0xaaaad9a6d240 WARN                 nvinfer gstnvinfer.cpp:2010:gst_nvinfer_process_tensor_input:<primary-hoop-ball-inference> warning: nvinfer could not find input layer with name = input_1:0

Warning: gst-stream-error-quark: nvinfer could not find input layer with name = input_1:0
 (1): /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(2010): gst_nvinfer_process_tensor_input (): /GstPipeline:pipeline0/GstNvInfer:primary-pose-inference
Warning: gst-stream-error-quark: nvinfer could not find input layer with name = input_1:0
 (1): /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(2010): gst_nvinfer_process_tensor_input (): /GstPipeline:pipeline0/GstNvInfer:primary-hoop-ball-inference
Frame 1: Detected 0 poses (from uid=1)
=== probe: pad caps: video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, multiview-mode=(string)mono, multiview-flags=(GstVideoMultiviewFlagsSet)0:ffffffff:/right-view-first/left-flipped/left-flopped/right-flipped/right-flopped/half-aspect/mixed-mono, framerate=(fraction)30/1, batch-size=(int)1, num-surfaces-per-frame=(int)1, format=(string)RGBA, block-linear=(boolean)false, nvbuf-memory-type=(string)nvbuf-mem-surface-array, gpu-id=(int)0
Frame Number: 2
0:00:00.880649222 1674196 0xaaaad9a6d2a0 WARN                 nvinfer gstnvinfer.cpp:2010:gst_nvinfer_process_tensor_input:<primary-pose-inference> warning: nvinfer could not find input layer with name = input_1:0

0:00:00.880869679 1674196 0xaaaad9a6d240 WARN                 nvinfer gstnvinfer.cpp:2010:gst_nvinfer_process_tensor_input:<primary-hoop-ball-inference> warning: nvinfer could not find input layer with name = input_1:0

Warning: gst-stream-error-quark: nvinfer could not find input layer with name = input_1:0
 (1): /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(2010): gst_nvinfer_process_tensor_input (): /GstPipeline:pipeline0/GstNvInfer:primary-pose-inference
Warning: gst-stream-error-quark: nvinfer could not find input layer with name = input_1:0
 (1): /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(2010): gst_nvinfer_process_tensor_input (): /GstPipeline:pipeline0/GstNvInfer:primary-hoop-ball-inference
Frame 2: Detected 0 poses (from uid=1)
=== probe: pad caps: video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, multiview-mode=(string)mono, multiview-flags=(GstVideoMultiviewFlagsSet)0:ffffffff:/right-view-first/left-flipped/left-flopped/right-flipped/right-flopped/half-aspect/mixed-mono, framerate=(fraction)30/1, batch-size=(int)1, num-surfaces-per-frame=(int)1, format=(string)RGBA, block-linear=(boolean)false, nvbuf-memory-type=(string)nvbuf-mem-surface-array, gpu-id=(int)0
Frame Number: 3
0:00:00.906666568 1674196 0xaaaad9a6d2a0 WARN                 nvinfer gstnvinfer.cpp:2010:gst_nvinfer_process_tensor_input:<primary-pose-inference> warning: nvinfer could not find input layer with name = input_1:0

Warning: gst-stream-error-quark: nvinfer could not find input layer with name = input_1:0
 (1): /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(2010): gst_nvinfer_process_tensor_input (): /GstPipeline:pipeline0/GstNvInfer:primary-pose-inference
Frame 3: Detected 0 poses (from uid=1)
0:00:00.907265280 1674196 0xaaaad9a6d240 WARN                 nvinfer gstnvinfer.cpp:2010:gst_nvinfer_process_tensor_input:<primary-hoop-ball-inference> warning: nvinfer could not find input layer with name = input_1:0

Warning: gst-stream-error-quark: nvinfer could not find input layer with name = input_1:0
 (1): /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(2010): gst_nvinfer_process_tensor_input (): /GstPipeline:pipeline0/GstNvInfer:primary-hoop-ball-inference
=== probe: pad caps: video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, multiview-mode=(string)mono, multiview-flags=(GstVideoMultiviewFlagsSet)0:ffffffff:/right-view-first/left-flipped/left-flopped/right-flipped/right-flopped/half-aspect/mixed-mono, framerate=(fraction)30/1, batch-size=(int)1, num-surfaces-per-frame=(int)1, format=(string)RGBA, block-linear=(boolean)false, nvbuf-memory-type=(string)nvbuf-mem-surface-array, gpu-id=(int)0
Frame Number: 4
Frame 4: Detected 0 poses (from uid=1)
0:00:00.932525571 1674196 0xaaaad9a6d2a0 WARN                 nvinfer gstnvinfer.cpp:2010:gst_nvinfer_process_tensor_input:<primary-pose-inference> warning: nvinfer could not find input layer with name = input_1:0

0:00:00.933084442 1674196 0xaaaad9a6d240 WARN                 nvinfer gstnvinfer.cpp:2010:gst_nvinfer_process_tensor_input:<primary-hoop-ball-inference> warning: nvinfer could not find input layer with name = input_1:0

Warning: gst-stream-error-quark: nvinfer could not find input layer with name = input_1:0
 (1): /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(2010): gst_nvinfer_process_tensor_input (): /GstPipeline:pipeline0/GstNvInfer:primary-pose-inference
Warning: gst-stream-error-quark: nvinfer could not find input layer with name = input_1:0
 (1): /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(2010): gst_nvinfer_process_tensor_input (): /GstPipeline:pipeline0/GstNvInfer:primary-hoop-ball-inference
=== probe: pad caps: video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, multiview-mode=(string)mono, multiview-flags=(GstVideoMultiviewFlagsSet)0:ffffffff:/right-view-first/left-flipped/left-flopped/right-flipped/right-flopped/half-aspect/mixed-mono, framerate=(fraction)30/1, batch-size=(int)1, num-surfaces-per-frame=(int)1, format=(string)RGBA, block-linear=(boolean)false, nvbuf-memory-type=(string)nvbuf-mem-surface-array, gpu-id=(int)0
Frame Number: 5
Frame 5: Detected 0 poses (from uid=1)
0:00:00.954294105 1674196 0xaaaad9a6d2a0 WARN                 nvinfer gstnvinfer.cpp:2010:gst_nvinfer_process_tensor_input:<primary-pose-inference> warning: nvinfer could not find input layer with name = input_1:0

0:00:00.954868496 1674196 0xaaaad9a6d240 WARN                 nvinfer gstnvinfer.cpp:2010:gst_nvinfer_process_tensor_input:<primary-hoop-ball-inference> warning: nvinfer could not find input layer with name = input_1:0

Warning: gst-stream-error-quark: nvinfer could not find input layer with name = input_1:0
 (1): /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(2010): gst_nvinfer_process_tensor_input (): /GstPipeline:pipeline0/GstNvInfer:primary-pose-inference
Warning: gst-stream-error-quark: nvinfer could not find input layer with name = input_1:0
 (1): /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(2010): gst_nvinfer_process_tensor_input (): /GstPipeline:pipeline0/GstNvInfer:primary-hoop-ball-inference

Here is my config_preprocess.txt file:

################################################################################
# SPDX-FileCopyrightText: Copyright (c) 2022 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: Apache-2.0
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
################################################################################

# The values in the config file are overridden by values set through GObject
# properties.

[property]
enable=1
target-unique-ids=1
    # 0=NCHW, 1=NHWC, 2=CUSTOM
network-input-order=0
process-on-frame=1
    # if enabled maintain the aspect ratio while scaling
maintain-aspect-ratio=1
    # if enabled pad symmetrically with maintain-aspect-ratio enabled
symmetric-padding=1
    # processing width/height at which image scaled
processing-width=960
processing-height=544
scaling-buf-pool-size=6
tensor-buf-pool-size=6
    # tensor shape based on network-input-order
network-input-shape=8;3;544;960
    # 0=RGB, 1=BGR, 2=GRAY

network-color-format=0
    # 0=FP32, 1=UINT8, 2=INT8, 3=UINT32, 4=INT32, 5=FP16
tensor-data-type=0
tensor-name=input_1:0
    # 0=NVBUF_MEM_DEFAULT 1=NVBUF_MEM_CUDA_PINNED 2=NVBUF_MEM_CUDA_DEVICE 3=NVBUF_MEM_CUDA_UNIFIED
scaling-pool-memory-type=0
    # 0=NvBufSurfTransformCompute_Default 1=NvBufSurfTransformCompute_GPU 2=NvBufSurfTransformCompute_VIC
scaling-pool-compute-hw=0
    # Scaling Interpolation method
    # 0=NvBufSurfTransformInter_Nearest 1=NvBufSurfTransformInter_Bilinear 2=NvBufSurfTransformInter_Algo1
    # 3=NvBufSurfTransformInter_Algo2 4=NvBufSurfTransformInter_Algo3 5=NvBufSurfTransformInter_Algo4
    # 6=NvBufSurfTransformInter_Default
scaling-filter=0
custom-lib-path=/opt/nvidia/deepstream/deepstream/lib/gst-plugins/libcustom2d_preprocess.so
custom-tensor-preparation-function=CustomTensorPreparation

[user-configs]
pixel-normalization-factor=0.003921568
#mean-file=
#offsets=


[group-0]
src-ids=0
custom-input-transformation-function=CustomAsyncTransformation
process-on-roi=1
roi-params-src-0=0;540;900;500
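
For context, the preprocess plugin is presumably wired here the way the deepstream-preprocess-test sample does it: nvdspreprocess sits between nvstreammux and the first PGIE, and each nvinfer gets input-tensor-meta=True so it consumes the tensors prepared by the preprocess plugin instead of doing its own scaling. A minimal sketch (the pgie_pose / pgie_hoopball names are illustrative):

preprocess = Gst.ElementFactory.make("nvdspreprocess", "preprocess-plugin")
preprocess.set_property("config-file", "config_preprocess.txt")

pgie_pose.set_property("input-tensor-meta", True)
pgie_hoopball.set_property("input-tensor-meta", True)

# streammux -> nvdspreprocess -> pose PGIE -> hoop/ball PGIE -> tiler/OSD ...
streammux.link(preprocess)
preprocess.link(pgie_pose)
pgie_pose.link(pgie_hoopball)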

And here is my custom pipeline with dual PGIEs:

#!/usr/bin/env python3

################################################################################
# YOLO-POSE DeepStream RTSP Application (multiple primary GIEs)
# - Runs a pose model (PGIE) and a separate hoop/ball model (another PGIE)
# - Both models operate on the full frame (two primary GIEs chained)
# - Draws pose keypoints only for the pose PGIE (filtered by unique_component_id)
# - Draws bounding boxes for hoop/ball PGIE
# - Outputs an RTSP stream
################################################################################

import sys
sys.path.append("../")
from common.bus_call import bus_call
from common.platform_info import PlatformInfo
import pyds

import math
from ctypes import *
import gi
gi.require_version("Gst", "1.0")
gi.require_version("GstRtspServer", "1.0")
from gi.repository import Gst, GstRtspServer, GLib
import cv2
import numpy as np
import argparse

MAX_ELEMENTS_IN_DISPLAY_META = 16

# simple 1-based skeleton pairs (matches code that expects 1-based indices)
skeleton = [
    [16, 14], [14, 12], [17, 15], [15, 13], [12, 13],
    [6, 12], [7, 13], [6, 7], [6, 8], [7, 9],
    [8, 10], [9, 11], [2, 3], [1, 2], [1, 3],
    [2, 4], [3, 5], [4, 6], [5, 7]
]

# MUXER_OUTPUT_WIDTH = 464
# MUXER_OUTPUT_HEIGHT = 848
MUXER_OUTPUT_WIDTH = 1920
MUXER_OUTPUT_HEIGHT = 1080
MUXER_BATCH_TIMEOUT_USEC = 33000
# TILED_OUTPUT_WIDTH = 464
# TILED_OUTPUT_HEIGHT = 848
TILED_OUTPUT_WIDTH = 1280
TILED_OUTPUT_HEIGHT = 720

# YOLO-POSE specific constants
POSE_CLASS_ID = 0

# Set these to match the "gie-unique-id" values in your inference config files.
# e.g. in yolopose_config.txt set gie-unique-id to 1 and in hoop_ball config set to 2
PGIE_POSE_UID = 1
PGIE_HOOPBALL_UID = 2


def parse_pose_from_meta(frame_meta, obj_meta):
    """
    Parse keypoints from obj_meta.mask_params and add display_meta circles/lines.
    Uses frame_meta.source_frame_width/height if present, otherwise falls back to muxer constants.
    """
    if not hasattr(obj_meta, "mask_params") or not obj_meta.mask_params:
        return

    try:
        num_joints = int(obj_meta.mask_params.size / (sizeof(c_float) * 3))
    except Exception:
        return

    try:
        data = obj_meta.mask_params.get_mask_array()
    except Exception:
        return

    frame_w = getattr(frame_meta, "source_frame_width", MUXER_OUTPUT_WIDTH)
    frame_h = getattr(frame_meta, "source_frame_height", MUXER_OUTPUT_HEIGHT)

    # Guard against invalid frame dimensions
    if frame_w <= 0 or frame_h <= 0:
        return

    mask_w = getattr(obj_meta.mask_params, "width", frame_w)
    mask_h = getattr(obj_meta.mask_params, "height", frame_h)
    if mask_w <= 0 or mask_h <= 0:
        return

    gain = min(mask_w / float(frame_w), mask_h / float(frame_h))
    pad_x = (mask_w - frame_w * gain) / 2.0
    pad_y = (mask_h - frame_h * gain) / 2.0

    batch_meta = frame_meta.base_meta.batch_meta
    display_meta = pyds.nvds_acquire_display_meta_from_pool(batch_meta)
    pyds.nvds_add_display_meta_to_frame(frame_meta, display_meta)

    # Draw circles for keypoints
    for i in range(num_joints):
        xc = int((data[i * 3 + 0] - pad_x) / gain)
        yc = int((data[i * 3 + 1] - pad_y) / gain)
        conf = data[i * 3 + 2]
        if conf < 0.35:
            continue

        if display_meta.num_circles >= MAX_ELEMENTS_IN_DISPLAY_META:
            display_meta = pyds.nvds_acquire_display_meta_from_pool(batch_meta)
            pyds.nvds_add_display_meta_to_frame(frame_meta, display_meta)

        circle_params = display_meta.circle_params[display_meta.num_circles]
        circle_params.xc = xc
        circle_params.yc = yc
        circle_params.radius = 4
        circle_params.circle_color.red = 1.0
        circle_params.circle_color.green = 1.0
        circle_params.circle_color.blue = 1.0
        circle_params.circle_color.alpha = 1.0
        circle_params.has_bg_color = 1
        circle_params.bg_color.red = 0.0
        circle_params.bg_color.green = 0.0
        circle_params.bg_color.blue = 1.0
        circle_params.bg_color.alpha = 1.0
        display_meta.num_circles += 1

    # Draw skeleton lines
    for pair in skeleton:
        a, b = pair
        idx_a = (a - 1) * 3
        idx_b = (b - 1) * 3
        if idx_a + 2 >= len(data) or idx_b + 2 >= len(data):
            continue
        x1 = int((data[idx_a + 0] - pad_x) / gain)
        y1 = int((data[idx_a + 1] - pad_y) / gain)
        c1 = data[idx_a + 2]
        x2 = int((data[idx_b + 0] - pad_x) / gain)
        y2 = int((data[idx_b + 1] - pad_y) / gain)
        c2 = data[idx_b + 2]

        if c1 < 0.35 or c2 < 0.35:
            continue

        if display_meta.num_lines >= MAX_ELEMENTS_IN_DISPLAY_META:
            display_meta = pyds.nvds_acquire_display_meta_from_pool(batch_meta)
            pyds.nvds_add_display_meta_to_frame(frame_meta, display_meta)

        line_params = display_meta.line_params[display_meta.num_lines]
        line_params.x1 = x1
        line_params.y1 = y1
        line_params.x2 = x2
        line_params.y2 = y2
        line_params.line_width = 2
        line_params.line_color.red = 0.0
        line_params.line_color.green = 1.0
        line_params.line_color.blue = 0.0
        line_params.line_color.alpha = 1.0
        display_meta.num_lines += 1


def pose_src_pad_buffer_probe(pad, info, u_data):
    """Probe attached after RGBA caps (so frame is available to draw via OpenCV if needed).
    This probe iterates all objects attached to the frame and:
      - draws pose keypoints only for objects produced by the pose PGIE
      - draws bounding boxes for hoop/ball PGIE results
    """
    try:
        caps = pad.get_current_caps()
        if caps:
            print("=== probe: pad caps:", caps.to_string())
    except Exception as e:
        print("Warning: unable to get pad caps in probe:", e)

    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK

    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    if not batch_meta:
        return Gst.PadProbeReturn.OK

    l_frame = batch_meta.frame_meta_list

    while l_frame is not None:
        try:
            frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        except StopIteration:
            break

        frame_number = frame_meta.frame_num
        print(f"Frame Number: {frame_number}")

        # Acquire a CPU-accessible view of the frame (RGBA) for possible OpenCV drawing and final write-back
        try:
            nv_frame = pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)
            frame_image_rgba = np.asarray(nv_frame, order='C')
        except Exception as e:
            print("Warning: could not get_nvds_buf_surface:", e)
            try:
                l_frame = l_frame.next
            except StopIteration:
                break
            continue

        # convert to BGR for any OpenCV drawing (we don't rely on OpenCV for pose drawing: we use display meta)
        try:
            frame_bgr = cv2.cvtColor(frame_image_rgba, cv2.COLOR_RGBA2BGR)
        except Exception as e:
            print("Warning: color conversion failed:", e)
            try:
                l_frame = l_frame.next
            except StopIteration:
                break
            continue

        l_obj = frame_meta.obj_meta_list
        num_poses = 0

        while l_obj is not None:
            try:
                obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
            except StopIteration:
                break

            uid = getattr(obj_meta, 'unique_component_id', None)
            cls_id = getattr(obj_meta, 'class_id', None)

            # Debug
            try:
                left = int(obj_meta.rect_params.left)
                top = int(obj_meta.rect_params.top)
                w = int(obj_meta.rect_params.width)
                h = int(obj_meta.rect_params.height)
            except Exception:
                left = top = w = h = None

            mask_size = None
            try:
                if hasattr(obj_meta, 'mask_params') and obj_meta.mask_params:
                    mask_size = getattr(obj_meta.mask_params, 'size', None)
            except Exception:
                mask_size = 'err'

            print(f"DEBUG OBJ: uid={uid} class_id={cls_id} rect=({left},{top},{w},{h}) mask_size={mask_size}")

            # If this object came from the pose PGIE, draw pose keypoints via display meta
            if uid == PGIE_POSE_UID and cls_id == POSE_CLASS_ID:
                num_poses += 1
                parse_pose_from_meta(frame_meta, obj_meta)

            # If this object came from the hoop/ball PGIE, draw a bbox (full-frame detector)
            elif uid == PGIE_HOOPBALL_UID:
                # only attach a rect when rect_params were readable above;
                # otherwise we would increment num_rects on an empty rectangle
                if left is not None:
                    batch_meta_local = frame_meta.base_meta.batch_meta
                    dmeta = pyds.nvds_acquire_display_meta_from_pool(batch_meta_local)
                    rect = dmeta.rect_params[dmeta.num_rects]
                    rect.left = left
                    rect.top = top
                    rect.width = w
                    rect.height = h
                    rect.border_width = 3
                    rect.border_color.red = 1.0
                    rect.border_color.green = 0.0
                    rect.border_color.blue = 0.0
                    rect.border_color.alpha = 1.0
                    dmeta.num_rects += 1
                    pyds.nvds_add_display_meta_to_frame(frame_meta, dmeta)

            try:
                l_obj = l_obj.next
            except StopIteration:
                break

        print(f"Frame {frame_number}: Detected {num_poses} poses (from uid={PGIE_POSE_UID})")

        # Convert BGR back to RGBA and write into the nv frame buffer (in-place)
        try:
            frame_rgba_out = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGBA)
            np.copyto(frame_image_rgba, frame_rgba_out)
        except Exception as e:
            print("Warning: could not write back modified frame:", e)

        try:
            l_frame = l_frame.next
        except StopIteration:
            break

    return Gst.PadProbeReturn.OK
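
# A minimal sketch for drawing the manually chosen ROI box on the full frame:
# attach one extra display-meta rect per frame and let nvdsosd render it on
# the full-frame output. ROI_RECT is a hypothetical constant (left, top,
# width, height) using the example coordinates from this thread; adjust to
# your actual ROI. Only pyds calls already used above are needed.
ROI_RECT = (312, 357, 643, 784)

def draw_roi_rect(batch_meta, frame_meta, roi=ROI_RECT):
    """Attach a green rectangle marking the manual ROI to this frame."""
    left, top, width, height = roi
    dmeta = pyds.nvds_acquire_display_meta_from_pool(batch_meta)
    rect = dmeta.rect_params[dmeta.num_rects]
    rect.left = left
    rect.top = top
    rect.width = width
    rect.height = height
    rect.border_width = 2
    rect.border_color.red = 0.0
    rect.border_color.green = 1.0
    rect.border_color.blue = 0.0
    rect.border_color.alpha = 1.0
    dmeta.num_rects += 1
    pyds.nvds_add_display_meta_to_frame(frame_meta, dmeta)

# To use it, call draw_roi_rect(batch_meta, frame_meta) once per frame inside
# the `while l_frame` loop of the probe above.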


# --- rest of GStreamer pipeline setup (largely unchanged from your working pipeline) ---

def cb_newpad(decodebin, decoder_src_pad, data):
    print("In cb_newpad")
    caps = decoder_src_pad.get_current_caps()
    gststruct = caps.get_structure(0)
    gstname = gststruct.get_name()
    source_bin = data
    features = caps.get_features(0)

    print("gstname=", gstname)
    if gstname.find("video") != -1:
        print("features=", features)
        if features.contains("memory:NVMM"):
            bin_ghost_pad = source_bin.get_static_pad("src")
            if not bin_ghost_pad.set_target(decoder_src_pad):
                sys.stderr.write("Failed to link decoder src pad to source bin ghost pad\n")
        else:
            sys.stderr.write("Error: Decodebin did not pick nvidia decoder plugin.\n")


def decodebin_child_added(child_proxy, Object, name, user_data):
    print("Decodebin child added:", name)
    if name.find("decodebin") != -1:
        Object.connect("child-added", decodebin_child_added, user_data)

    if ts_from_rtsp:
        if name.find("source") != -1:
            pyds.configure_source_for_ntp_sync(hash(Object))


def create_source_bin(index, uri):
    print("Creating source bin")
    bin_name = "source-bin-%02d" % index
    print(bin_name)
    nbin = Gst.Bin.new(bin_name)
    if not nbin:
        sys.stderr.write("Unable to create source bin\n")

    uri_decode_bin = Gst.ElementFactory.make("uridecodebin", "uri-decode-bin")
    if not uri_decode_bin:
        sys.stderr.write("Unable to create uri decode bin\n")

    uri_decode_bin.set_property("uri", uri)
    uri_decode_bin.connect("pad-added", cb_newpad, nbin)
    uri_decode_bin.connect("child-added", decodebin_child_added, nbin)

    Gst.Bin.add(nbin, uri_decode_bin)
    bin_pad = nbin.add_pad(Gst.GhostPad.new_no_target("src", Gst.PadDirection.SRC))
    if not bin_pad:
        sys.stderr.write("Failed to add ghost pad in source bin\n")
        return None
    return nbin


def main(args):
    number_sources = len(args)

    Gst.init(None)

    print("Creating Pipeline")
    pipeline = Gst.Pipeline()
    is_live = False

    if not pipeline:
        sys.stderr.write("Unable to create Pipeline\n")

    print("Creating streamux")
    streammux = Gst.ElementFactory.make("nvstreammux", "Stream-muxer")
    if not streammux:
        sys.stderr.write("Unable to create NvStreamMux\n")

    pipeline.add(streammux)

    for i in range(number_sources):
        print("Creating source_bin", i)
        uri_name = args[i]
        if uri_name.find("rtsp://") == 0:
            is_live = True
        source_bin = create_source_bin(i, uri_name)
        if not source_bin:
            sys.stderr.write("Unable to create source bin\n")
        pipeline.add(source_bin)
        padname = "sink_%u" % i
        sinkpad = streammux.request_pad_simple(padname)
        if not sinkpad:
            sys.stderr.write("Unable to create sink pad bin\n")
        srcpad = source_bin.get_static_pad("src")
        if not srcpad:
            sys.stderr.write("Unable to create src pad bin\n")
        srcpad.link(sinkpad)

    print("Creating YOLO-POSE inference engines (two PGIEs)")
    if gie == "nvinfer":
        pgie_pose = Gst.ElementFactory.make("nvinfer", "primary-pose-inference")
        pgie_hoop_ball = Gst.ElementFactory.make("nvinfer", "primary-hoop-ball-inference")
    else:
        pgie_pose = Gst.ElementFactory.make("nvinferserver", "primary-pose-inference")
        pgie_hoop_ball = Gst.ElementFactory.make("nvinferserver", "primary-hoop-ball-inference")

    if not pgie_pose or not pgie_hoop_ball:
        sys.stderr.write("Unable to create pgie elements\n")

    print("Creating tiler")
    tiler = Gst.ElementFactory.make("nvmultistreamtiler", "nvtiler")
    if not tiler:
        sys.stderr.write("Unable to create tiler\n")

    print("Creating nvvidconv (pre-tiler)")
    nvvidconv = Gst.ElementFactory.make("nvvideoconvert", "convertor")
    if not nvvidconv:
        sys.stderr.write("Unable to create nvvidconv\n")

    print("Creating nvvidconv_rgba (for RGBA conversion)")
    nvvidconv_rgba = Gst.ElementFactory.make("nvvideoconvert", "convertor_rgba")
    if not nvvidconv_rgba:
        sys.stderr.write("Unable to create nvvidconv_rgba\n")

    print("Creating capsfilter for RGBA")
    caps_rgba = Gst.ElementFactory.make("capsfilter", "caps_rgba")
    if not caps_rgba:
        sys.stderr.write("Unable to create caps_rgba\n")
    else:
        caps_rgba.set_property("caps", Gst.Caps.from_string("video/x-raw(memory:NVMM), format=RGBA"))

    print("Creating nvosd")
    nvosd = Gst.ElementFactory.make("nvdsosd", "onscreendisplay")
    if not nvosd:
        sys.stderr.write("Unable to create nvosd\n")

    print("Creating nvvidconv_postosd")
    nvvidconv_postosd = Gst.ElementFactory.make("nvvideoconvert", "convertor_postosd")
    if not nvvidconv_postosd:
        sys.stderr.write("Unable to create nvvidconv_postosd\n")

    preprocess = Gst.ElementFactory.make("nvdspreprocess", "preprocess-plugin")
    if not preprocess:
        sys.stderr.write("Unable to create preprocess\n")

    pgie_pose.set_property("input-tensor-meta", True)
    pgie_hoop_ball.set_property("input-tensor-meta", True)
    preprocess.set_property("config-file", "config_preprocess.txt")


    # Set properties for hardware acceleration (optional)
    # nvvidconv.set_property('nvbuf-memory-type', 0)
    nvvidconv.set_property('copy-hw', 2)
    # nvvidconv_rgba.set_property('nvbuf-memory-type', 0)
    nvvidconv_rgba.set_property('copy-hw', 2)
    # nvvidconv_postosd.set_property('nvbuf-memory-type', 0)
    nvvidconv_postosd.set_property('copy-hw', 2)

    # Create caps filter (I420 before encoder)
    caps = Gst.ElementFactory.make("capsfilter", "filter")
    caps.set_property("caps", Gst.Caps.from_string("video/x-raw, format=I420"))

    # Create encoder
    if codec == "H264":
        encoder = Gst.ElementFactory.make("x264enc", "encoder")
        print("Creating Software H264 Encoder")
    elif codec == "H265":
        encoder = Gst.ElementFactory.make("x265enc", "encoder")
        print("Creating Software H265 Encoder")

    if not encoder:
        sys.stderr.write("Unable to create encoder\n")

    # x264enc/x265enc expect "bitrate" in kbit/s, but the CLI default of
    # 4000000 is in bit/s; convert so the value stays in range and the
    # "invalid or out of range" warning seen in the log does not appear
    try:
        encoder.set_property("bitrate", max(1, int(bitrate) // 1000))
    except Exception:
        print("Warning: failed to set encoder bitrate property")

    # Create RTP payload
    if codec == "H264":
        rtppay = Gst.ElementFactory.make("rtph264pay", "rtppay")
        print("Creating H264 rtppay")
    elif codec == "H265":
        rtppay = Gst.ElementFactory.make("rtph265pay", "rtppay")
        print("Creating H265 rtppay")
    if not rtppay:
        sys.stderr.write("Unable to create rtppay\n")

    # Create UDP sink
    updsink_port_num = 5400
    sink = Gst.ElementFactory.make("udpsink", "udpsink")
    if not sink:
        sys.stderr.write("Unable to create udpsink\n")

    sink.set_property("host", "224.224.255.255")
    sink.set_property("port", updsink_port_num)
    sink.set_property("async", False)
    sink.set_property("sync", 1)

    # Set streammux properties
    streammux.set_property("width", MUXER_OUTPUT_WIDTH)
    streammux.set_property("height", MUXER_OUTPUT_HEIGHT)
    streammux.set_property("batch-size", number_sources)
    streammux.set_property("batched-push-timeout", MUXER_BATCH_TIMEOUT_USEC)
    streammux.set_property('nvbuf-memory-type', 0)

    if ts_from_rtsp:
        streammux.set_property("attach-sys-ts", 0)

    # Configure inference engines
    if gie == "nvinfer":
        pgie_pose.set_property("config-file-path", "yolopose_config.txt")
        pgie_hoop_ball.set_property("config-file-path", "dstest1_pgie_config.txt")
    else:
        pgie_pose.set_property("config-file-path", "yolopose_inferserver_config.txt")
        pgie_hoop_ball.set_property("config-file-path", "dstest1_pgie_config.txt")

    # ensure batch-size on primary matches number of sources
    try:
        pgie_batch_size = pgie_pose.get_property("batch-size")
        if pgie_batch_size != number_sources:
            print(f"WARNING: Overriding infer-config batch-size {pgie_batch_size} with number of sources {number_sources}")
            pgie_pose.set_property("batch-size", number_sources)
    except Exception:
        pass

    print("Adding elements to Pipeline")
    tiler_rows = int(math.sqrt(number_sources))
    tiler_columns = int(math.ceil((1.0 * number_sources) / tiler_rows))
    tiler.set_property("rows", tiler_rows)
    tiler.set_property("columns", tiler_columns)
    tiler.set_property("width", TILED_OUTPUT_WIDTH)
    tiler.set_property("height", TILED_OUTPUT_HEIGHT)
    sink.set_property("qos", 0)

    # Add elements to pipeline
    pipeline.add(preprocess)
    pipeline.add(pgie_pose)
    pipeline.add(pgie_hoop_ball)
    pipeline.add(nvvidconv)
    pipeline.add(nvvidconv_rgba)
    pipeline.add(caps_rgba)
    pipeline.add(tiler)
    pipeline.add(nvosd)
    pipeline.add(nvvidconv_postosd)
    pipeline.add(caps)
    pipeline.add(encoder)
    pipeline.add(rtppay)
    pipeline.add(sink)

    # Link elements sequentially: streammux -> pgie_pose -> pgie_hoop_ball -> nvvidconv -> ...
    if not streammux.link(preprocess):
        sys.stderr.write("Failed to link streammux -> preprocess\n")
    if not preprocess.link(pgie_pose):
        sys.stderr.write("Failed to link preprocess -> pgie_pose\n")
    if not pgie_pose.link(pgie_hoop_ball):
        sys.stderr.write("Failed to link pgie_pose -> pgie_hoop_ball\n")
    if not pgie_hoop_ball.link(nvvidconv):
        sys.stderr.write("Failed to link pgie_hoop_ball -> nvvidconv\n")
    if not nvvidconv.link(nvvidconv_rgba):
        sys.stderr.write("Failed to link nvvidconv -> nvvidconv_rgba\n")
    if not nvvidconv_rgba.link(caps_rgba):
        sys.stderr.write("Failed to link nvvidconv_rgba -> caps_rgba\n")
    if not caps_rgba.link(tiler):
        sys.stderr.write("Failed to link caps_rgba -> tiler\n")
    if not tiler.link(nvosd):
        sys.stderr.write("Failed to link tiler -> nvosd\n")
    if not nvosd.link(nvvidconv_postosd):
        sys.stderr.write("Failed to link nvosd -> nvvidconv_postosd\n")
    if not nvvidconv_postosd.link(caps):
        sys.stderr.write("Failed to link nvvidconv_postosd -> caps\n")
    if not caps.link(encoder):
        sys.stderr.write("Failed to link caps -> encoder\n")
    if not encoder.link(rtppay):
        sys.stderr.write("Failed to link encoder -> rtppay\n")
    if not rtppay.link(sink):
        sys.stderr.write("Failed to link rtppay -> sink\n")

    # Create event loop and feed gstreamer bus messages to it
    loop = GLib.MainLoop()
    bus = pipeline.get_bus()
    bus.add_signal_watch()
    bus.connect("message", bus_call, loop)

    # Add probe to get pose estimation results
    rgba_src_pad = caps_rgba.get_static_pad("src")
    if not rgba_src_pad:
        sys.stderr.write("Unable to get src pad of caps_rgba\n")
    else:
        rgba_src_pad.add_probe(Gst.PadProbeType.BUFFER, pose_src_pad_buffer_probe, 0)

    # Start RTSP streaming
    rtsp_port_num = 8512
    server = GstRtspServer.RTSPServer.new()
    server.props.service = "%d" % rtsp_port_num
    server.attach(None)

    factory = GstRtspServer.RTSPMediaFactory.new()
    factory.set_launch(
        '( udpsrc name=pay0 port=%d buffer-size=524288 caps="application/x-rtp, media=video, clock-rate=90000, encoding-name=(string)%s, payload=96 " )'
        % (updsink_port_num, codec)
    )
    factory.set_shared(True)
    server.get_mount_points().add_factory("/stream", factory)

    print(f"\n*** YOLO-POSE DeepStream: Launched RTSP Streaming at rtsp://localhost:{rtsp_port_num}/stream ***\n")
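    # A quick way to sanity-check the output stream (example client command,
    # adjust the host for remote viewing):
    #   ffplay rtsp://127.0.0.1:8512/stream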

    # Start pipeline
    print("Starting pipeline")
    pipeline.set_state(Gst.State.PLAYING)
    try:
        loop.run()
    except BaseException:
        pass

    # Cleanup
    pipeline.set_state(Gst.State.NULL)


def parse_args():
    parser = argparse.ArgumentParser(description='YOLO-POSE DeepStream RTSP Application')
    parser.add_argument("-i", "--input",
                       help="Path to input stream (RTSP URL, video file, etc.)",
                       nargs="+", required=True)
    parser.add_argument("-g", "--gie", default="nvinfer",
                       help="Choose GPU inference engine type nvinfer or nvinferserver, default=nvinfer",
                       choices=['nvinfer', 'nvinferserver'])
    parser.add_argument("-c", "--codec", default="H264",
                       help="RTSP Streaming Codec H264/H265, default=H264",
                       choices=['H264', 'H265'])
    parser.add_argument("-b", "--bitrate", default=4000000,
                       help="Set the encoding bitrate", type=int)
    parser.add_argument("--rtsp-ts", action="store_true", default=False, dest='rtsp_ts',
                       help="Attach NTP timestamp from RTSP source")

    if len(sys.argv) == 1:
        parser.print_help(sys.stderr)
        sys.exit(1)

    args = parser.parse_args()
    global codec, bitrate, stream_path, gie, ts_from_rtsp
    gie = args.gie
    codec = args.codec
    bitrate = args.bitrate
    stream_path = args.input
    ts_from_rtsp = args.rtsp_ts
    return stream_path


if __name__ == '__main__':
    stream_path = parse_args()
    sys.exit(main(stream_path))

Is it possible to use the nvdspreprocess plugin with two separate PGIEs in one pipeline?
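
For reference, here is a minimal sketch of what the config_preprocess.txt referenced above could look like for one fixed ROI, modeled on the sample config shipped with DeepStream. The input shape, tensor name, unique IDs, and custom-lib path are assumptions and must match your two models:

[property]
enable=1
# must match the gie-unique-id of each nvinfer config consuming the tensor
target-unique-ids=1;2
network-input-order=0
network-input-shape=1;3;640;640
processing-width=640
processing-height=640
network-color-format=0
tensor-data-type=0
tensor-name=input
custom-lib-path=/opt/nvidia/deepstream/deepstream/lib/gst-plugins/libcustom2d_preprocess.so
custom-tensor-preparation-function=CustomTensorPreparation

[group-0]
src-ids=0
custom-input-transformation-function=CustomAsyncTransformation
process-on-roi=1
# left;top;width;height, the manual ROI from this thread
roi-params-src-0=312;357;643;784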
