Setup:
• Hardware Platform (Jetson / GPU): dGPU
• DeepStream Version: 6.0
• NVIDIA GPU Driver Version (valid for GPU only): 470.63.01
• Issue Type (questions, new requirements, bugs): Bug
We are using the new nvstreammux in DeepStream 6, which produces batches in which each stream keeps its own input resolution. The pipeline terminates in an appsink where selected detections are JPEG encoded using nvds_obj_enc_process. As long as all streams have the same resolution this works well. With streams of different input resolutions, however, calls to nvds_obj_enc_process hang while consuming a large amount of memory. This occurs as soon as an object from a higher-resolution stream is encoded after an object from a lower-resolution stream has been encoded.
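For reference, this is roughly how we invoke the encoder. It is a minimal sketch modeled on the pad probe in the deepstream-image-meta-test sample; in our own application the equivalent calls are made from the appsink callback, the object-selection logic is omitted here, and the exact NvDsObjEncUsrArgs fields available vary between DeepStream versions:

#include <gst/gst.h>
#include "gstnvdsmeta.h"
#include "nvds_obj_encode.h"
#include "nvbufsurface.h"

/* Simplified sketch of the per-batch encode loop, modeled on the
 * deepstream-image-meta-test sample. "u_data" is the encoder context
 * created earlier with nvds_obj_enc_create_context(). */
static GstPadProbeReturn
encode_objects_probe (GstPad * pad, GstPadProbeInfo * info, gpointer u_data)
{
  NvDsObjEncCtxHandle ctx = (NvDsObjEncCtxHandle) u_data;
  GstBuffer *buf = GST_PAD_PROBE_INFO_BUFFER (info);
  GstMapInfo in_map = { 0 };

  if (!gst_buffer_map (buf, &in_map, GST_MAP_READ))
    return GST_PAD_PROBE_OK;
  NvBufSurface *surface = (NvBufSurface *) in_map.data;

  NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);
  for (NvDsMetaList * l_frame = batch_meta->frame_meta_list; l_frame;
      l_frame = l_frame->next) {
    NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) l_frame->data;
    for (NvDsMetaList * l_obj = frame_meta->obj_meta_list; l_obj;
        l_obj = l_obj->next) {
      NvDsObjectMeta *obj_meta = (NvDsObjectMeta *) l_obj->data;

      NvDsObjEncUsrArgs args = { 0 };
      args.saveImg = TRUE;       /* write the JPEG to a file */
      args.attachUsrMeta = TRUE; /* attach the JPEG as object user meta */

      /* This is the call that hangs once an object from a higher-resolution
       * stream follows an object from a lower-resolution stream. */
      nvds_obj_enc_process (ctx, &args, surface, obj_meta, frame_meta);
    }
  }
  /* Wait for all encodes queued for this batch to complete. */
  nvds_obj_enc_finish (ctx);

  gst_buffer_unmap (buf, &in_map);
  return GST_PAD_PROBE_OK;
}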
We realize that the new nvstreammux is still in beta, so it may not be realistic to expect all DeepStream components to work with heterogeneous batches. The new nvstreammux does, however, have many advantages over the previous one, so it would be very good if a solution to this problem can be found.
Yes, we have been able to reproduce the error with the deepstream-image-meta-test sample application, which is part of the DeepStream 6 development image. Running that application with the new nvstreammux and two input streams, with the first stream having a lower resolution than the second, reproduced the error in our tests. The following command can be used in the DeepStream 6 development image (using streams available in that image):
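This is the same command as in the DeepStream 6.0.1 reproduction steps further down:

USE_NEW_NVSTREAMMUX=yes deepstream-image-meta-test file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.mp4 file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4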
In our previous tests we have seen that this error occurs when an object from a higher-resolution stream is encoded after an object from a lower-resolution stream has been encoded. That is very likely to happen in deepstream-image-meta-test when the low-resolution stream is the first argument. With the order of the streams switched, the error did not occur in our tests. We also tested both cases with the old nvstreammux without any errors.
It would be very good if we could get an update on this issue so we can decide whether we should plan for a different solution than nvds_obj_enc_process for JPEG encoding in DeepStream. The error can be reproduced as described in my previous message.
The error is that “streams with different input resolutions … causes calls to nvds_obj_enc_process to hang while consuming a large amount of memory”, as I wrote in my first message. The call never returns, and the end result is that the system runs out of memory.
Could you try reproducing the error with the deepstream-image-meta-test sample as I described?
I can add logging before the nvds_obj_enc_process call if needed, and we have done so in our own application. I’m just not sure what more information to log, as we already have a clean repro of the error with the deepstream-image-meta-test sample. A sketch of the kind of information we can log is below.
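A hypothetical helper along these lines, called right before nvds_obj_enc_process, is what we have in mind; the fields used are standard NvDsFrameMeta / NvDsObjectMeta members and let us correlate the hang with the source stream's resolution:

#include <glib.h>
#include "nvdsmeta.h"

/* Hypothetical logging helper: print which source and object geometry is
 * about to be encoded, so a hang can be matched to a specific stream. */
static void
log_encode_attempt (NvDsFrameMeta * frame_meta, NvDsObjectMeta * obj_meta)
{
  g_print ("nvds_obj_enc_process: source_id=%u frame=%ux%u bbox=%.0fx%.0f\n",
      frame_meta->source_id,
      frame_meta->source_frame_width, frame_meta->source_frame_height,
      obj_meta->rect_params.width, obj_meta->rect_params.height);
}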
Yes, this happens with correct metadata. We have tested encoding the exact same objects from a single stream, and the JPEG encoding works without any problems in that case.
As DeepStream 6.0.1 has been released since I reported this bug, I have now also verified that this error can be reproduced in the nvcr.io/nvidia/deepstream:6.0.1-devel container.
Inside that container, run the following commands to build the deepstream-image-meta-test sample:
cd /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-image-meta-test
CUDA_VER=11.4 make
Command to reproduce the error:
USE_NEW_NVSTREAMMUX=yes deepstream-image-meta-test file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.mp4 file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4
As I wrote earlier, the error occurs when nvds_obj_enc_process first encodes an object from a low-resolution stream and then an object from a higher-resolution stream. This happens in the deepstream-image-meta-test sample program when the low-resolution stream is the first argument.
I have run the above command several times and the result is always the same: the process consumes a large amount of memory and is finally killed because the system runs out of memory.
When switching the order of the input streams there are no errors:
USE_NEW_NVSTREAMMUX=yes deepstream-image-meta-test file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4 file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.mp4
Also, both cases work without any errors with the old nvstreammux:
USE_NEW_NVSTREAMMUX=no deepstream-image-meta-test file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.mp4 file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4
USE_NEW_NVSTREAMMUX=no deepstream-image-meta-test file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4 file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.mp4
It would be great if you could run the steps to reproduce this error as described above.
We are still very interested in getting an answer to this issue so we can decide if we should plan for using a different solution than nvds_obj_enc_process for jpeg encoding in Deepstream.
As I wrote before, it would be very good to get an answer to this issue so we can decide whether we should plan for a different solution than nvds_obj_enc_process for JPEG encoding in DeepStream.
Detailed steps for reproducing this issue are in my comment above.
Yes, I confirm that we need JPEG encoding in our project. It would be most convenient if we could use nvds_obj_enc_process, as we already have working code apart from this issue.
@chadtgreen Unfortunately we did not get any help with this issue, and the problem remains in DeepStream 6.1. The repro is the same as above, except for using CUDA_VER=11.6 when building deepstream-image-meta-test.
@kesong We are still very interested in getting an answer to this issue so we can decide if we should plan for using a different solution than nvds_obj_enc_process for jpeg encoding in DeepStream.