Encountering Issues with scaling-compute-hw and compute-hw in Plugins on Jetson Orin 8GB - DeepStream 6.4

Hello NVIDIA Community,

I’m currently facing a challenge with my Jetson Orin 8GB setup while trying to use scaling-compute-hw and compute-hw across plugins such as nvinfer and nvvideoconvert. Below are the details of my setup and the issues I’m encountering:

  • Hardware Platform: Jetson Orin 8GB
  • DeepStream Version: 6.4
  • JetPack Version: 6.0 DP (L4T 36.2.0)
  • TensorRT Version: 8.6.2.3

Problem Description:

When attempting to use scaling-compute-hw and compute-hw, I encounter errors indicating that input CUDA memory is not supported on Jetson for output-io-mode=dmabuf-import, as well as warnings about buffer sizes and memory tags. Here are the relevant error logs:

0:02:06.754825534     1 0xaaaae3eb2f60 ERROR         v4l2bufferpool gstv4l2bufferpool.c:564:gst_v4l2_buffer_pool_import_dmabuf:<encoder-2:pool:sink> Input CUDA Memory not supported on Jeston for output-io-mode=dmabuf-import,element = ��
0:02:06.754865471     1 0xaaaae3eb2f60 WARN              bufferpool gstbufferpool.c:1246:default_reset_buffer:<encoder-2:pool:sink> Buffer 0xfffe5c006ea0 without the memory tag has maxsize (0) that is smaller than the configured buffer pool size (64). The buffer will be not be reused. This is most likely a bug in this GstBufferPool subclass
0:02:06.754879168     1 0xaaaae3eb2f60 ERROR         v4l2bufferpool gstv4l2bufferpool.c:2420:gst_v4l2_buffer_pool_process:<encoder-2:pool:sink> failed to prepare data
0:02:06.754894080     1 0xaaaae3eb2f60 WARN            v4l2videoenc gstv4l2videoenc.c:1728:gst_v4l2_video_enc_handle_frame:<encoder-2> error: Failed to process frame.
0:02:06.754900481     1 0xaaaae3eb2f60 WARN            v4l2videoenc gstv4l2videoenc.c:1728:gst_v4l2_video_enc_handle_frame:<encoder-2> error: Maybe be due to not enough memory or failing driver
0:02:06.755432724     1 0xaaaae4703360 ERROR         v4l2bufferpool gstv4l2bufferpool.c:564:gst_v4l2_buffer_pool_import_dmabuf:<encoder-0:pool:sink> Input CUDA Memory not supported on Jeston for output-io-mode=dmabuf-import,element = ��
ERROR — Received error from gstreamer pipeline with message: /dvs/git/dirty/git-master_linux/3rdparty/gst/gst-v4l2/gst-v4l2/gstv4l2videoenc.c(1728): gst_v4l2_video_enc_handle_frame (): /GstPipeline:pipeline0/GstBin:rtsp-sink-bin-2/nvv4l2h264enc:encoder-2:

Additionally, without using scaling, I occasionally run into another error related to nvvideoconvert, indicating that the input buffer is not NvBufSurface:

0:00:15.951443298     1 0xaaaabe677000 ERROR         nvvideoconvert gstnvvideoconvert.c:3825:gst_nvvideoconvert_transform: Input buffer is not NvBufSurface
...
0:00:15.951552134     1 0xaaaabe6771e0 ERROR         nvvideoconvert gstnvvideoconvert.c:4208:gst_nvvideoconvert_transform: buffer transform failed

Steps to Reproduce:

  1. Use a base application from the DeepStream Python samples, for example, test1.
  2. Run the application with scaling-compute-hw (and compute-hw, where available) set to 1, as in the sketch below.
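For reference, the only change I make on top of the sample is to set these properties after the elements are created. Here is a minimal sketch; the variable names follow deepstream_test_1.py, the enum value 1 = GPU is taken from the plugin documentation, and my understanding is that nvinfer takes scaling-compute-hw through its config file rather than as a GObject property:

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

# Elements as created in deepstream_test_1.py (variable names follow the sample)
pgie = Gst.ElementFactory.make("nvinfer", "primary-inference")
nvvidconv = Gst.ElementFactory.make("nvvideoconvert", "convertor")

# compute-hw is a GObject property on nvvideoconvert: 0=Default, 1=GPU, 2=VIC (Jetson only)
nvvidconv.set_property("compute-hw", 1)

# For nvinfer, scaling-compute-hw is (to my understanding) a config-file key, not a
# GObject property; it goes into the [property] group of dstest1_pgie_config.txt:
#   scaling-compute-hw=1   (0=platform default, 1=GPU, 2=VIC)
pgie.set_property("config-file-path", "dstest1_pgie_config.txt")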

I’m looking for insights or solutions from the community regarding these errors. Is there a specific configuration or workaround to successfully use these features without encountering these errors?

Thank you in advance for your support and guidance.

Why do you need to change compute-hw? Did you change nvbuf-memory-type? The video encoder needs nvbuf-mem-surface-array memory on Jetson.

I’m looking to adjust the compute-hw setting to maintain consistency with a configuration that worked well for me in DeepStream 6.2. I recently upgraded to DeepStream 6.4, hoping to retain the same settings for a seamless transition, but I encountered the issues described.

To clarify, are you suggesting that I should specifically set the video encoder’s memory type to nvbuf-mem-surface-array to address the issues I’ve encountered? If so, could you please guide me on how to apply this change correctly in DeepStream 6.4 with the Python bindings? I’m keen to understand whether this approach will let me use the scaling-compute-hw and compute-hw settings without running into the errors I mentioned.
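To check my understanding, here is a minimal sketch of what I would try. The element names are placeholders, and I am assuming from gst-inspect-1.0 nvvideoconvert that enum value 4 of nvbuf-memory-type corresponds to nvbuf-mem-surface-array on Jetson:

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

# Converter placed immediately before the V4L2 encoder (element names are placeholders)
conv_pre_enc = Gst.ElementFactory.make("nvvideoconvert", "convertor-pre-encoder")
encoder = Gst.ElementFactory.make("nvv4l2h264enc", "encoder-0")

# On Jetson the V4L2 encoder expects surface-array (dmabuf) memory on its input, so the
# converter feeding it should output that memory type. Assuming 4 = nvbuf-mem-surface-array
# here (0 = platform default should also resolve to it on Jetson):
conv_pre_enc.set_property("nvbuf-memory-type", 4)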

Thank you again for your guidance. I appreciate your help in navigating these configurations!

Hello Kesong,

I found that the pipeline issue, which sometimes leads to slowdowns and eventual drops, seems to occur when nvvideoconvert is used right after nvstreamdemux. This specific arrangement in my pipeline sporadically triggers the following errors, degrading performance and eventually crashing the DeepStream pipeline:

ERROR nvvideoconvert gstnvvideoconvert.c:3825:gst_nvvideoconvert_transform: Input buffer is not NvBufSurface
ERROR nvvideoconvert gstnvvideoconvert.c:4208:gst_nvvideoconvert_transform: buffer transform failed

This behavior introduces instability in the pipeline and significantly impacts its reliability. The occasional crash further complicates the situation, making it challenging to maintain a consistent performance baseline.

The relevant piece of my GStreamer pipeline is the per-stream branch after nvstreamdemux, sketched below.
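In simplified form, each branch is nvstreamdemux → nvvideoconvert → nvv4l2h264enc. The sketch below shows only the wiring of that branch; the element names, pad index, and omitted properties are illustrative rather than my exact configuration:

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)
pipeline = Gst.Pipeline.new("demo")

# Per-stream branch after the demuxer (upstream nvstreammux/nvinfer omitted for brevity;
# element names, pad index, and properties are illustrative)
demux = Gst.ElementFactory.make("nvstreamdemux", "demuxer")
conv = Gst.ElementFactory.make("nvvideoconvert", "convertor-2")
encoder = Gst.ElementFactory.make("nvv4l2h264enc", "encoder-2")
for element in (demux, conv, encoder):
    pipeline.add(element)

# nvstreamdemux exposes one request src pad per stream: src_0, src_1, ...
demux_src = demux.get_request_pad("src_2")  # request_pad_simple() on GStreamer >= 1.20
demux_src.link(conv.get_static_pad("sink"))
conv.link(encoder)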

After further experimentation and considering the insights provided, I suspect the intermittent issues we’re experiencing with nvvideoconvert might not solely be rooted in our pipeline configuration. Interestingly, the behaviour is sporadic: sometimes, the pipeline operates as expected without any issues, and at other times, we encounter the previously mentioned errors, leading to slowdowns and drops. This inconsistency led me to ponder the stability of our current DeepStream version.

We are currently utilizing DeepStream 6.4 Developer Preview (6.4DP). Given the nature of developer previews, I understand they are not the final, general-release versions and may contain bugs that will be addressed in future releases. The fact that our pipeline works flawlessly at times suggests that the configuration and setup might indeed be correct, and the issues could be attributed to the 6.4DP version itself.

Could you or anyone familiar with the development roadmap confirm whether these issues are known within the 6.4DP context? More importantly, should we expect resolutions to such problems in the upcoming release of DeepStream 6.4? Understanding this will help us determine whether we should continue seeking a bug within our pipeline or if patience until the next DeepStream release is more prudent.

I have attached images showcasing instances where the pipeline functions correctly, further suggesting that our configuration can work under certain conditions.

Your guidance on whether these issues are anticipated to be resolved in the general release or if further action is recommended on our part would be greatly appreciated. Thank you for your continued support and for fostering such a helpful community.



There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

Moved to DeepStream forum.

Can you reproduce the issue with a DeepStream Python sample? Can you share the steps to reproduce it?

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.