Could not receive any UDP packets for 5.0000 seconds, maybe your firewall is blocking it. Retrying using a tcp connection

I modified the deepstream-test1 code as follows, and now I can run the C deepstream-test1 sample normally:

  if(prop.integrated) {
    /* Jetson (integrated GPU): use the nv3dsink renderer */
    sink = gst_element_factory_make("nv3dsink", "nv3d-sink");
  } else {
    /* dGPU: nveglglessink needs an EGL display, so replace it with fakesink */
    /* sink = gst_element_factory_make ("nveglglessink", "nvvideo-renderer"); */
    sink = gst_element_factory_make("fakesink", "fake-sink");
  }

root@SSTY-001:/opt/nvidia/deepstream/deepstream-6.3/sources/apps/sample_apps/deepstream-test1# ./deepstream-test1-app /opt/nvidia/deepstream/deepstream-6.3/samples/streams/sample_720p.h264
log.txt (112.8 KB)

That means there are problems with the EGL display.

You mentioned you are using A40 GPU. Have you run DeepStream 6.1.1 on A40 before?

Thanks for your reply, I haven’t run DeepStream 6.1.1 on A40 before.

I just changed the sink to sink = Gst.ElementFactory.make("fakesink", "fakesink") in 'root@SSTY-001:/opt/nvidia/deepstream/deepstream-6.3/sources/apps/sample_apps/deepstream-test1#'. Now the Python demo can also run normally, but my custom Python code still fails with the same error as before. Is it possible that streaming code based on DeepStream 6.1.1 is not compatible with the DeepStream 6.3 runtime environment?
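For reference, the sink change in the Python sample was along these lines (a minimal, self-contained sketch that only creates the element; everything else in deepstream_test_1.py stays as it is in the sample):

  #!/usr/bin/env python3
  # Sketch of the sink swap: create a fakesink instead of nveglglessink so that
  # no EGL/X display is needed. Only the element creation is shown here.
  import sys

  import gi
  gi.require_version("Gst", "1.0")
  from gi.repository import Gst

  Gst.init(None)

  # Original line in the sample:
  # sink = Gst.ElementFactory.make("nveglglessink", "nvvideo-renderer")

  # Replacement used here:
  sink = Gst.ElementFactory.make("fakesink", "fakesink")
  if not sink:
      sys.stderr.write("Unable to create fakesink\n")
      sys.exit(1)

  # fakesink discards buffers; disabling sync avoids waiting on the pipeline clock
  sink.set_property("sync", False)
  print("fakesink created:", sink.get_name())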

2023-09-01 11:06:00 - INFO - Creating streamux
2023-09-01 11:06:00 - INFO - bin_name:source-bin-00
2023-09-01 11:06:00 - INFO - Creating nvvidconv1
2023-09-01 11:06:00 - INFO - Creating filter1
2023-09-01 11:06:00 - INFO - Creating Fakesink
2023-09-01 11:06:00 - INFO - Now playing...
2023-09-01 11:06:00 - INFO - 0:rtsp://xxx/Streaming/Channels/101
2023-09-01 11:06:00 - INFO - Starting pipeline
2023-09-01 11:06:00 - INFO - Decodebin child added: source
2023-09-01 11:06:00 - INFO - Decodebin child added: decodebin0
2023-09-01 11:06:00 - INFO - Decodebin child added: rtph264depay0
2023-09-01 11:06:00 - INFO - Decodebin child added: h264parse0
2023-09-01 11:06:00 - INFO - Decodebin child added: capsfilter0
2023-09-01 11:06:00 - INFO - Decodebin child added: decodebin1
2023-09-01 11:06:00 - INFO - Decodebin child added: rtppcmadepay0
2023-09-01 11:06:00 - INFO - Decodebin child added: nvv4l2decoder0
2023-09-01 11:06:00 - INFO - only decode key frame
2023-09-01 11:06:00 - INFO - Decodebin child added: alawdec0
Cuda failure: status=801
Error(-1) in buffer allocation

** (python3:1061256): CRITICAL **: 11:06:02.744: gst_nvds_buffer_pool_alloc_buffer: assertion 'mem' failed
2023-09-01 11:06:02 - INFO - Error: Err:gst-resource-error-quark: Failed to allocate the buffers inside the Nvstreammux output pool (1),Debug:gstnvstreammux.cpp(866): gst_nvstreammux_alloc_output_buffers (): /GstPipeline:pipeline0/GstNvStreamMux:Stream-muxer

Can you help confirm whether there is a compatibility problem between DeepStream 6.1.1 and DeepStream 6.3?

Is this a bug in version 6.3? Does code developed on version 6.1.1 simply not run in the 6.3 environment? The GST_DEBUG=3 output is as follows:

root@SSTY-001:/home# GST_DEBUG=3 python3 start_program.py

log.txt (15.1 KB)

If you change the nveglglessink in the Python deepstream-test1 to fakesink, does deepstream-test1 work in your DeepStream 6.3 environment?

Yes, it works fine. I opened a new topic; please guide me here

Is there nveglglessink in your app?

No, I use sink = Gst.ElementFactory.make("fakesink", "fakesink") in my program. Based on the information above, can we rule out hardware incompatibility?

No. There is no evidence of hardware incompatibility. It seems the DeepStream samples work but your app fails. Can you check what the differences are? Can you reproduce the failure by modifying any sample application in DeepStream?

I'm new to DeepStream; I might need you to guide me on how to troubleshoot.

There is no update from you for a period, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks

The guidance is to modify the DeepStream sample app to reproduce the failure, so we can know what you have done.
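For example, a stripped-down pipeline like the sketch below (URI source -> nvstreammux -> fakesink, no inference) could be used to reproduce the buffer-allocation failure; the RTSP URI, resolution, and batch settings are placeholders for the values from your app:

  #!/usr/bin/env python3
  # Illustrative repro sketch (not an exact sample): decode one URI, batch it with
  # nvstreammux, and discard it with fakesink. A failure here isolates the
  # decode/streammux buffer allocation from the rest of the custom app.
  import sys

  import gi
  gi.require_version("Gst", "1.0")
  from gi.repository import Gst, GLib


  def on_pad_added(decodebin, pad, streammux):
      # Link only the decoded video pad into the muxer; ignore audio pads.
      caps = pad.get_current_caps()
      if caps and caps.to_string().startswith("video"):
          sinkpad = streammux.get_request_pad("sink_0")
          if pad.link(sinkpad) != Gst.PadLinkReturn.OK:
              sys.stderr.write("Failed to link decoder to streammux\n")


  def main(uri):
      Gst.init(None)
      pipeline = Gst.Pipeline.new("repro-pipeline")

      source = Gst.ElementFactory.make("uridecodebin", "uri-decode-bin")
      source.set_property("uri", uri)

      streammux = Gst.ElementFactory.make("nvstreammux", "Stream-muxer")
      streammux.set_property("batch-size", 1)
      streammux.set_property("width", 1280)        # placeholder
      streammux.set_property("height", 720)        # placeholder
      streammux.set_property("batched-push-timeout", 4000000)

      sink = Gst.ElementFactory.make("fakesink", "fakesink")

      for elem in (source, streammux, sink):
          pipeline.add(elem)
      streammux.link(sink)
      source.connect("pad-added", on_pad_added, streammux)

      loop = GLib.MainLoop()
      bus = pipeline.get_bus()
      bus.add_signal_watch()
      bus.connect("message::error",
                  lambda b, m: (sys.stderr.write(str(m.parse_error()) + "\n"), loop.quit()))
      bus.connect("message::eos", lambda b, m: loop.quit())

      pipeline.set_state(Gst.State.PLAYING)
      try:
          loop.run()
      finally:
          pipeline.set_state(Gst.State.NULL)


  if __name__ == "__main__":
      main(sys.argv[1])  # e.g. rtsp://... or file:///path/to/sample_720p.h264

If this stripped-down pipeline fails the same way on your RTSP stream, attach it along with the GST_DEBUG log so the difference from the working samples can be narrowed down.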

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.