Error: Cuda failure in createTexture when using get_nvds_buf_surface on DGX SPARK with DeepStream 8.0

Environment Details:

  • Hardware: NVIDIA DGX SPARK

  • DeepStream Version: 8.0 (container)

  • Application: Modified version of deepstream-imagedata-multistream sample

Issue Description:

Hello,

I previously opened a topic for a similar issue on AGX Thor with JetPack 7.0 and DeepStream 8.0: Error mapping buffer to CPU with get_nvds_buf_surface on AGX Thor JP7.0 DS 8.0

In that topic, the solution provided worked for Thor. Now, I’m encountering the same problem on DGX SPARK, and I tried the same fix (e.g., updating the buffer mapping logic as suggested). However, I’m now getting the following errors when trying to map the buffer to CPU using get_nvds_buf_surface:

Cuda failure: status=1 in createTexture at line 850
0:00:01.351990308 757045 0x40b242d0 ERROR nvvideoconvert gstnvvideoconvert.c:4345:gst_nvvideoconvert_transform: buffer transform failed
0:00:01.352047236 757045 0x40b242d0 WARN nvinfer gstnvinfer.cpp:2435:gst_nvinfer_output_loop: error: Internal data stream error.
0:00:01.352053556 757045 0x40b242d0 WARN nvinfer gstnvinfer.cpp:2435:gst_nvinfer_output_loop: error: streaming stopped, reason error (-5)
Cuda failure: status=1 in createTexture at line 850
0:00:01.352224373 757045 0x40b242d0 ERROR nvvideoconvert gstnvvideoconvert.c:4345:gst_nvvideoconvert_transform: buffer transform failed
Error: gst-stream-error-quark: Internal data stream error. (1): /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(2435): gst_nvinfer_output_loop (): /GstPipeline:pipeline0/GstNvInfer:primary-inference:
streaming stopped, reason error (-5)
Cuda failure: status=1 in createTexture at line 850
Exiting app

0:00:01.352424790 757045 0x40b242d0 ERROR nvvideoconvert gstnvvideoconvert.c:4345:gst_nvvideoconvert_transform: buffer transform failed
Cuda failure: status=1 in createTexture at line 850
0:00:01.352798489 757045 0x40b242d0 ERROR nvvideoconvert gstnvvideoconvert.c:4345:gst_nvvideoconvert_transform: buffer transform failed
Cuda failure: status=1 in createTexture at line 850
0:00:01.353415261 757045 0x40b242d0 ERROR nvvideoconvert gstnvvideoconvert.c:4345:gst_nvvideoconvert_transform: buffer transform failed
nvstreammux: Successfully handled EOS for source_id=0
Cuda failure: status=1 in createTexture at line 850
0:00:01.354560404 757045 0x40b242d0 ERROR nvvideoconvert gstnvvideoconvert.c:4345:gst_nvvideoconvert_transform: buffer transform failed
nvbuf_utils: dmabuf_fd 0 mapped entry NOT found
nvbuf_utils: dmabuf_fd 0 mapped entry NOT found
nvbuf_utils: dmabuf_fd 0 mapped entry NOT found
nvbuf_utils: dmabuf_fd 0 mapped entry NOT found
nvbuf_utils: dmabuf_fd 0 mapped entry NOT found
nvbuf_utils: dmabuf_fd 0 mapped entry NOT found

The pipeline crashes with these CUDA texture creation failures.

What I’ve Tried:

As a workaround, I replaced get_nvds_buf_surface with a custom method that extracts the frame directly from the GStreamer buffer using NumPy:

import numpy as np

# Read the frame dimensions from the pad caps, then reinterpret the raw
# (system-memory) buffer bytes as an H x W x 3 uint8 array.
structure = caps.get_structure(0)
height = structure.get_value('height')
width = structure.get_value('width')
frame = np.ndarray((height, width, 3),
                   buffer=buf.extract_dup(0, buf.get_size()),
                   dtype=np.uint8)

However, for this to work, the buffer must be in CPU (system) memory rather than NVMM, which requires an extra conversion step in the pipeline. Then, to get the frame with bounding boxes after the OSD, I use another probe that converts from NVMM back to CPU memory.
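Shown standalone with synthetic data (the dimensions and byte pattern here are placeholders standing in for the caps values and the real buffer contents), the extraction boils down to reinterpreting the raw interleaved RGB bytes:

```python
import numpy as np

# Hypothetical dimensions standing in for the values read from the caps.
height, width = 4, 6

# Stand-in for buf.extract_dup(0, buf.get_size()): one RGB triple per pixel.
raw = b"\x00\x01\x02" * (height * width)

# Reinterpret the bytes as an H x W x 3 array without copying per pixel.
frame = np.ndarray((height, width, 3), buffer=raw, dtype=np.uint8)
print(frame.shape)           # (4, 6, 3)
print(frame[0, 0].tolist())  # [0, 1, 2]
```

Note that an array built over a `bytes` buffer is read-only; the real probe copies via extract_dup, so any in-place edits would need a further `.copy()` anyway.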

This results in an inefficient flow: GPU → CPU (probe for raw images) → GPU (OSD) → CPU (probe for frames with bounding boxes).

While this workaround functions, it means I can’t use DeepStream’s default get_nvds_buf_surface function, which defeats the purpose of using the SDK’s built-in tools.

Questions:

  1. Is there a proper fix for get_nvds_buf_surface on DGX SPARK with DeepStream 8.0 to avoid these CUDA errors?

  2. Alternatively, how can I adapt my buffer extraction code to use CuPy for direct GPU buffer access (without CPU copies)? Something like pulling the frame into a CuPy array from the GPU buffer.
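For reference, the approach I have in mind is modeled on NVIDIA's deepstream_imagedata-multistream_cupy sample, which wraps the NVMM surface in a CuPy array via pyds.get_nvds_buf_surface_gpu. This fragment runs inside a pad probe (gst_buffer and frame_meta come from the usual pyds metadata iteration), and I have not verified it on DGX Spark:

```
import ctypes
import cupy as cp
import pyds

# get_nvds_buf_surface_gpu returns a description of the existing device
# allocation instead of mapping it to the CPU.
owner = None
data_type, shape, strides, data_ptr, size = pyds.get_nvds_buf_surface_gpu(
    hash(gst_buffer), frame_meta.batch_id)

# Unwrap the PyCapsule holding the raw device pointer.
ctypes.pythonapi.PyCapsule_GetPointer.restype = ctypes.c_void_p
ctypes.pythonapi.PyCapsule_GetPointer.argtypes = [ctypes.py_object, ctypes.c_char_p]
c_data_ptr = ctypes.pythonapi.PyCapsule_GetPointer(data_ptr, None)

# Wrap the existing device memory in a CuPy array -- no CPU copy involved.
unowned = cp.cuda.UnownedMemory(c_data_ptr, size, owner)
memptr = cp.cuda.MemoryPointer(unowned, 0)
frame_gpu = cp.ndarray(shape=shape, dtype=data_type, memptr=memptr,
                       strides=strides, order='C')
```

Would this be the recommended pattern on this platform, or does it hit the same createTexture path?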

Any insights or patches would be greatly appreciated. I can provide a minimal reproducible example if needed.

Thanks!


Any updates?

Sorry for the late reply! What are the driver and CUDA versions? Are you testing with the deepstream:8.0-triton-dgx-spark docker image? And do you mean that, when testing with the modified deepstream-imagedata-multistream.py mentioned in the first link, the application reports “Cuda failure: status=1 in createTexture at line 850”?

what are the driver and CUDA version?

Driver version (from nvidia-smi): 580.126.09
CUDA version: 13.0

are you testing with deepstream:8.0-triton-dgx-spark docker?

Yes

do you mean testing with the modified deepstream-imagedata-multistream.py mentioned in the first link?

Yes

the application reports “Cuda failure: status=1 in createTexture at line 850”?

Yes

I will continue to update this topic until I receive a meaningful response.

I have the same issue. Is there any update on this topic?

Any update? I have a similar problem.

This "createTexture" issue can be reproduced. We are investigating and will get back to you.

@mertaydogan06 @diyaralma457 @mbatuhan Sorry for the late reply!
We have fixed this issue internally. Below is a workaround:

  1. Rebuild the bindings according to the link. In particular, run the following command before running ‘python3 -m build’, then reinstall the new wheel from bindings/dist:
export CMAKE_ARGS="-DIS_SBSA=on"
  2. Apply the patch patch.txt (1.4 KB) at this code line. Here is my test log: log.txt (4.3 KB)
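For anyone else hitting this, the rebuild sequence might look like the following sketch. The repository path and wheel filename are assumptions on my side, based on the standard deepstream_python_apps bindings build; adjust them to your checkout:

```
# Assumed location of the deepstream_python_apps checkout -- adjust as needed.
cd /opt/nvidia/deepstream/deepstream/sources/deepstream_python_apps/bindings

# The flag from step 1 above must be set before building.
export CMAKE_ARGS="-DIS_SBSA=on"

python3 -m build

# Wheel name may differ depending on the bindings version.
pip3 install --force-reinstall dist/pyds-*.whl
```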