Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU) - Tesla T4 GPU
• DeepStream Version - 7.0.0
• JetPack Version (valid for Jetson only)
• TensorRT Version - 8.6.1.6-1+cuda12.0
• NVIDIA GPU Driver Version (valid for GPU only) - Build cuda_12.2.r12.2/compiler.33191640_0
• Issue Type (questions, new requirements, bugs)
It throws the following error when using any batch-size greater than 1 in nvstreammux.
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
gst-launch-1.0
I don't think this is related so much to the batch-size in nvstreammux as to the fact that you are trying to convert a batched buffer into host memory. If you insert an nvmultistreamtiler before nvdsosd, which composes the 4 frames into one, you will see that the problem goes away. You can also use nvstreamdemux to "debatch" the buffer back into the original sources and then compose them together; it depends on your use case.
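A rough sketch of the tiler approach, assuming 4 local file sources, a generic pgie config path, and an EGL sink (all of these are placeholders, not your actual setup):

```shell
# Hypothetical pipeline: batch 4 sources, run inference, then let
# nvmultistreamtiler compose the batch into a single 2x2 frame
# before nvdsosd, so no batched buffer reaches the sink.
gst-launch-1.0 \
  nvstreammux name=mux batch-size=4 width=1920 height=1080 ! \
  nvinfer config-file-path=pgie_config.txt ! \
  nvmultistreamtiler rows=2 columns=2 width=1280 height=720 ! \
  nvvideoconvert ! nvdsosd ! nveglglessink \
  uridecodebin uri=file:///path/video0.mp4 ! mux.sink_0 \
  uridecodebin uri=file:///path/video1.mp4 ! mux.sink_1 \
  uridecodebin uri=file:///path/video2.mp4 ! mux.sink_2 \
  uridecodebin uri=file:///path/video3.mp4 ! mux.sink_3
```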
Hi Allen,
Thanks for the reply. nvmultistreamtiler did resolve the error for us; we now get all 4 streams composited into a single image as a 2D tile. Now, is there a way to send one of these frames (with bounding boxes) to some endpoint when there is a detection? Or do we have to use nvstreamdemux for that?
Please refer to /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-test3 for the correct pipeline, and to Gst-nvstreamdemux — DeepStream documentation for the nvstreamdemux use cases.
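For the demux use case the general shape looks something like this (a sketch only; source URIs, the pgie config path, and the per-branch sinks are placeholder assumptions):

```shell
# Hypothetical sketch: infer on the batch, then split it back into
# per-source streams with nvstreamdemux, so each stream can be
# drawn and routed independently (e.g. one branch per endpoint).
gst-launch-1.0 \
  nvstreammux name=mux batch-size=2 width=1920 height=1080 ! \
  nvinfer config-file-path=pgie_config.txt ! \
  nvstreamdemux name=demux \
  uridecodebin uri=file:///path/video0.mp4 ! mux.sink_0 \
  uridecodebin uri=file:///path/video1.mp4 ! mux.sink_1 \
  demux.src_0 ! nvvideoconvert ! nvdsosd ! nveglglessink \
  demux.src_1 ! nvvideoconvert ! nvdsosd ! fakesink
```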
But when I use an RTSP link instead of file:///workspace/full_auto_AK47.mp4, it doesn't work. I am sharing the log file below for your review, generated with GST_DEBUG=*:6. logs.log (5.1 MB)
0:00:07.921203552   104 0x62c0417c93b0 DEBUG               basesink gstbasesink.c:1280:gst_base_sink_query_latency:<appsink0> latency query failed but we are not live
Could you try setting the async and sync properties to false on the appsink element?
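As a minimal illustration of what that looks like in gst-launch syntax (the videotestsrc is just a stand-in for your real pipeline):

```shell
# Disable clock synchronization and asynchronous state changes on
# appsink, so a failed latency query cannot stall preroll.
gst-launch-1.0 videotestsrc num-buffers=10 ! \
  appsink sync=false async=false
```

In Python the equivalent would be setting the `sync` and `async` properties on the appsink element before playing the pipeline.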
Could you share the URI? Also, have you tried other elements like nvurisrcbin? It is also a good idea to check the RTSP stream manually with rtspsrc to make sure you are able to read it.
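One way to sanity-check the stream from the command line, independent of DeepStream (the URI is a placeholder, and this assumes the stream is H.264; swap in rtph265depay/h265parse for H.265):

```shell
# Verify the RTSP stream is reachable and decodable metadata flows;
# -v prints negotiated caps so you can confirm the codec.
gst-launch-1.0 -v rtspsrc location=rtsp://user:pass@host:554/stream ! \
  rtph264depay ! h264parse ! fakesink
```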
Hi @allan.navarro
Thanks for replying. Here is the URI: rtsp://admin:Cisco1947@3.129.17.212:10554/Streaming/Channels/101. It will be live for a few more hours (1-2 hrs).
No, we have not tried nvurisrcbin yet. But we don't think that is the issue, because the stream works fine when running in the terminal with a fake sink.
The pipeline you gave works fine in the terminal and shows the timer running. But when we run the same pipeline in Python with slight modifications, we don't get any output. We are attaching the code for your reference; the forum is not letting us upload Python code, so we are uploading it as a .txt file.
I have tried the pipeline from your code with gst-launch-1.0, and it worked normally. The problem may be caused by the rest of your code, which you will need to check yourself.