Error: gst-library-error-quark:

I'm using the RabbitMQ messenger with the deepstream-test4 app.


python3 deepstream_test_4.py  -i /opt/nvidia/deepstream/deepstream-5.1/samples/streams/sample_720p.h264 -p /opt/nvidia/deepstream/deepstream-5.1/lib/libnvds_amqp_proto.so  --conn-str="localhost;5672;guest" -s 1

Error

Playing file /opt/nvidia/deepstream/deepstream-5.1/samples/streams/sample_720p.h264 
Adding elements to Pipeline 

Linking elements in the Pipeline 

Starting pipeline 

Error: gst-library-error-quark: Could not configure supporting library. (5): gstnvmsgbroker.c(388): legacy_gst_nvmsgbroker_start (): /GstPipeline:pipeline0/GstNvMsgBroker:nvmsg-broker:
unable to connect to broker library

Hi, this looks like a DeepStream issue. We recommend you raise it on the respective forum.

Thanks!

I was able to fix the above errors, but I'm now stuck on the error below. Could you please help?
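For context, the command that got past the broker-connect error adds a `-c cfg_amqp.txt` broker config file. A minimal sketch of that file is shown below; the field layout follows the AMQP protocol adaptor README, but the values here (default guest credentials, `amq.topic` exchange, `topicname` topic) are assumptions for a stock local RabbitMQ install and should be adjusted to your broker:

```
[message-broker]
password = guest
hostname = localhost
username = guest
port = 5672
exchange = amq.topic
topic = topicname
```

With this file present, the password no longer needs to appear in the connection string on the command line.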

python3 deepstream_test_4.py -i /opt/nvidia/deepstream/deepstream-5.1/samples/streams/sample_720p.h264 -c cfg_amqp.txt -p /opt/nvidia/deepstream/deepstream-5.1/lib/libnvds_amqp_proto.so --conn-str="localhost;5672;guest" -s 0

Creating H264Parser 

Creating Decoder 

Creating EGLSink 

Playing file /opt/nvidia/deepstream/deepstream-5.1/samples/streams/sample_720p.h264 
Adding elements to Pipeline 

Linking elements in the Pipeline 

Starting pipeline 

0:00:02.423969665 14925      0x557b210 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1702> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-5.1/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:685 [Implicit Engine Info]: layers num: 3
0   INPUT  kFLOAT input_1         3x368x640       
1   OUTPUT kFLOAT conv2d_bbox     16x23x40        
2   OUTPUT kFLOAT conv2d_cov/Sigmoid 4x23x40         

0:00:02.424070105 14925      0x557b210 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1806> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-5.1/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine
0:00:02.424914103 14925      0x557b210 INFO                 nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<primary-inference> [UID 1]: Load new model:dstest4_pgie_config.txt sucessfully
Frame Number = 0 Vehicle Count = 4 Person Count = 2
Frame Number = 1 Vehicle Count = 4 Person Count = 2
0:00:02.558623557 14925      0x2c39850 WARN                 nvinfer gstnvinfer.cpp:1984:gst_nvinfer_output_loop:<primary-inference> error: Internal data stream error.
0:00:02.558641307 14925      0x2c39850 WARN                 nvinfer gstnvinfer.cpp:1984:gst_nvinfer_output_loop:<primary-inference> error: streaming stopped, reason not-negotiated (-4)
Error: gst-stream-error-quark: Internal data stream error. (1): gstnvinfer.cpp(1984): gst_nvinfer_output_loop (): /GstPipeline:pipeline0/GstNvInfer:primary-inference:
streaming stopped, reason not-negotiated (-4)
Frame Number = 2 Vehicle Count = 4 Person Count = 2

How do I check the response from the producer? Is there a queue I can check?

Which platform are you running on? You need to connect a monitor for nveglglessink to work. You can run either from the desktop or through a remote terminal, but export DISPLAY first before running:
export DISPLAY=:0 # or :1; run xrandr to check the display after exporting
If you run with Tesla cards, which have no DisplayPort or HDMI port, you can either follow Deepstream/FAQ - eLinux.org (5A) to set up a virtual display, or use RTSP streaming to see the output.

Yes. You can create a queue and bind it to the exchange with the specified routing key.
Check this:
sources/libs/amqp_protocol_adaptor/README
section "Test & verify messages published":