Pipeline crashes after an illegal memory access

• Hardware Platform (Jetson / GPU)
Jetson Orin Nano 8GB
• DeepStream Version
7.1
• JetPack Version (valid for Jetson only)
6.1
• TensorRT Version
10.3

Hello,
I built a pipeline starting from the deepstream-test4 sample and changed the source so that I could use my USB camera, an Intel RealSense D435i. The source chain now looks like this:

v4l2src -> caps_v4l2src -> nvvidconvsrc -> caps_nvvidconv -> nvstreammux -> ...

(the rest is unchanged)
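For reference, the modified source chain above corresponds to a launch string like the one built by this small sketch (the device path, caps, and resolution are assumptions based on the working gst-launch command further down; adjust them to your camera):

```python
# Sketch: assemble the gst-launch equivalent of the modified source chain.
# Device path, format, resolution, and framerate are assumptions.
def source_chain(device="/dev/video4", width=640, height=480, fps=30):
    return " ! ".join([
        f"v4l2src device={device}",
        f"video/x-raw,format=YUY2,width={width},height={height},framerate={fps}/1",
        "nvvideoconvert",                      # CPU -> NVMM surface conversion
        "video/x-raw(memory:NVMM),format=NV12",  # format nvstreammux expects
        "mux.sink_0",                          # feed into nvstreammux
    ])

print(source_chain())
```

Keeping the caps filters explicit on both sides of nvvideoconvert avoids ambiguous negotiation between the V4L2 driver and the NVMM path.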

Surprisingly, it works, but it emits these warnings:

0:00:00.660332064 3145434 0xaaab0d480c60 INFO                 nvinfer gstnvinfer.cpp:684:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2195> [UID = 1]: Use deserialized engine model: /home/concept/jackal-detection/src/detector.engine
0:00:00.668705408 3145434 0xaaab0d480c60 INFO                 nvinfer gstnvinfer_impl.cpp:343:notifyLoadModelStatus:<primary-nvinference-engine> [UID 1]: Load new model:/opt/nvidia/deepstream/deepstream-7.1/sources/apps/sample_apps/deepstream-test4/enf_pgie_config.txt sucessfully
Running...
mimetype is video/x-raw
0:00:00.693660896 3145434 0xaaab0d46ea40 WARN          v4l2bufferpool gstv4l2bufferpool.c:842:gst_v4l2_buffer_pool_start:<usb-cam-source:pool0:src> Uncertain or not enough buffers, enabling copy threshold
0:00:00.700636672 3145434 0xaaab0d46ea40 WARN          v4l2bufferpool gstv4l2bufferpool.c:1373:gst_v4l2_buffer_pool_dqbuf:<usb-cam-source:pool0:src> Driver should never set v4l2_buffer.field to ANY
0:00:00.700692736 3145434 0xaaab0d46ea40 WARN          v4l2bufferpool gstv4l2bufferpool.c:2222:gst_v4l2_buffer_pool_process:<usb-cam-source:pool0:src> Dropping truncated buffer, this is likely a driver bug.
0:00:00.700740896 3145434 0xaaab0d46ea40 WARN          v4l2bufferpool gstv4l2bufferpool.c:2222:gst_v4l2_buffer_pool_process:<usb-cam-source:pool0:src> Dropping truncated buffer, this is likely a driver bug.
0:00:00.700765248 3145434 0xaaab0d46ea40 WARN          v4l2bufferpool gstv4l2bufferpool.c:2222:gst_v4l2_buffer_pool_process:<usb-cam-source:pool0:src> Dropping truncated buffer, this is likely a driver bug.
0:00:00.700786432 3145434 0xaaab0d46ea40 WARN          v4l2bufferpool gstv4l2bufferpool.c:2222:gst_v4l2_buffer_pool_process:<usb-cam-source:pool0:src> Dropping truncated buffer, this is likely a driver bug.
0:00:01.846046848 3145434 0xaaab0d46ea40 WARN                 v4l2src gstv4l2src.c:1123:gst_v4l2src_create:<usb-cam-source> lost frames detected: count = 10 - ts: 0:00:01.080397960
0:00:01.903180928 3145434 0xaaab0d46ea40 WARN                 v4l2src gstv4l2src.c:1123:gst_v4l2src_create:<usb-cam-source> lost frames detected: count = 1 - ts: 0:00:01.180462472

After a few minutes it crashes:

/dvs/git/dirty/git-master_linux/nvutils/nvbufsurftransform/nvbufsurftransform_copy.cpp:341: => Failed in mem copy

ERROR: [TRT]: IExecutionContext::enqueueV3: Error Code 1: Cuda Driver (an illegal memory access was encountered)
ERROR: Failed to enqueue trt inference batch
ERROR: Infer context enqueue buffer failed, nvinfer error:NVDSINFER_TENSORRT_ERROR
0:06:20.180545376 3151068 0xaaaadf69a980 WARN                 nvinfer gstnvinfer.cpp:1420:gst_nvinfer_input_queue_loop:<primary-nvinference-engine> error: Failed to queue input batch for inferencing
ERROR from element primary-nvinference-engine: Failed to queue input batch for inferencing
Error details: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(1420): gst_nvinfer_input_queue_loop (): /GstPipeline:dstest4-pipeline/GstNvInfer:primary-nvinference-engine
Returned, stopping playback
0:06:20.189456960 3151068 0xaaaadf69ad80 ERROR                nvinfer gstnvinfer.cpp:1267:get_converted_buffer:<primary-nvinference-engine> cudaMemset2DAsync failed with error cudaErrorIllegalAddress while converting buffer
0:06:20.189505792 3151068 0xaaaadf69ad80 WARN                 nvinfer gstnvinfer.cpp:1576:gst_nvinfer_process_full_frame:<primary-nvinference-engine> error: Buffer conversion failed
CUDA Runtime error cudaFreeHost(host_) # an illegal memory access was encountered, code = cudaErrorIllegalAddress [ 700 ] in file /dvs/git/dirty/git-master_linux/deepstream/sdk/src/utils/nvll_osd/memory.hpp:78
CUDA Runtime error cudaFree(device_) # an illegal memory access was encountered, code = cudaErrorIllegalAddress [ 700 ] in file /dvs/git/dirty/git-master_linux/deepstream/sdk/src/utils/nvll_osd/memory.hpp:79
CUDA Runtime error cudaFreeHost(host_) # an illegal memory access was encountered, code = cudaErrorIllegalAddress [ 700 ] in file /dvs/git/dirty/git-master_linux/deepstream/sdk/src/utils/nvll_osd/memory.hpp:78
CUDA Runtime error cudaFree(device_) # an illegal memory access was encountered, code = cudaErrorIllegalAddress [ 700 ] in file /dvs/git/dirty/git-master_linux/deepstream/sdk/src/utils/nvll_osd/memory.hpp:79
CUDA Runtime error cudaFreeHost(host_) # an illegal memory access was encountered, code = cudaErrorIllegalAddress [ 700 ] in file /dvs/git/dirty/git-master_linux/deepstream/sdk/src/utils/nvll_osd/memory.hpp:78
CUDA Runtime error cudaFree(device_) # an illegal memory access was encountered, code = cudaErrorIllegalAddress [ 700 ] in file /dvs/git/dirty/git-master_linux/deepstream/sdk/src/utils/nvll_osd/memory.hpp:79
CUDA Runtime error cudaFreeHost(host_) # an illegal memory access was encountered, code = cudaErrorIllegalAddress [ 700 ] in file /dvs/git/dirty/git-master_linux/deepstream/sdk/src/utils/nvll_osd/memory.hpp:78
CUDA Runtime error cudaFree(device_) # an illegal memory access was encountered, code = cudaErrorIllegalAddress [ 700 ] in file /dvs/git/dirty/git-master_linux/deepstream/sdk/src/utils/nvll_osd/memory.hpp:79
CUDA Runtime error cudaFreeHost(host_) # an illegal memory access was encountered, code = cudaErrorIllegalAddress [ 700 ] in file /dvs/git/dirty/git-master_linux/deepstream/sdk/src/utils/nvll_osd/memory.hpp:78
CUDA Runtime error cudaFree(device_) # an illegal memory access was encountered, code = cudaErrorIllegalAddress [ 700 ] in file /dvs/git/dirty/git-master_linux/deepstream/sdk/src/utils/nvll_osd/memory.hpp:79
CUDA Runtime error cudaFreeHost(host_) # an illegal memory access was encountered, code = cudaErrorIllegalAddress [ 700 ] in file /dvs/git/dirty/git-master_linux/deepstream/sdk/src/utils/nvll_osd/memory.hpp:78
CUDA Runtime error cudaFree(device_) # an illegal memory access was encountered, code = cudaErrorIllegalAddress [ 700 ] in file /dvs/git/dirty/git-master_linux/deepstream/sdk/src/utils/nvll_osd/memory.hpp:79
GPUassert: an illegal memory access was encountered /dvs/git/dirty/git-master_linux/deepstream/sdk/src/utils/nvmultiobjecttracker/src/modules/cuDCFv2/cuDCFFrameTransformTexture.cu 693

!![Exception] GPUassert failed
An exception occurred. GPUassert failed
gstnvtracker: Low-level tracker lib returned error 1
[WARN ] 2025-02-17 14:33:57 (cudaErrorIllegalAddress)
[ERROR] 2025-02-17 14:33:57 Error destroying cuda device: !"�J��
[WARN ] 2025-02-17 14:33:57 (cudaErrorIllegalAddress)
[WARN ] 2025-02-17 14:33:57 (cudaErrorIllegalAddress)
[WARN ] 2025-02-17 14:33:57 (cudaErrorIllegalAddress)
[...]
[WARN ] 2025-02-17 14:33:57 (cudaErrorIllegalAddress)
[WARN ] 2025-02-17 14:33:57 (cudaErrorIllegalAddress)
[ERROR] 2025-02-17 14:33:57 Error destroying cuda device: y��J��
[WARN ] 2025-02-17 14:33:57 (cudaErrorIllegalAddress)
[...]
[WARN ] 2025-02-17 14:33:57 (cudaErrorIllegalAddress)
terminate called after throwing an instance of 'nv::cuda::RuntimeException'
  what():  cudaErrorIllegalAddress: 
Aborted (core dumped)

Any idea how I can track this down and solve it?

Here’s the config file:

source:
  device: /dev/video4

streammux:
  batch-size: 1
  batched-push-timeout: 40000
  width: 640
  height: 480
  live-source: 1
  gpu-id: 0
  enable-padding: 0
  nvbuf-memory-type: 0

nvtracker:
  tracker-width: 640
  tracker-height: 480
  gpu-id: 0
  ll-lib-file: /opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
  # ll-config-file: /opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/config_tracker_IOU.yml
  # ll-config-file: /opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/config_tracker_NvSORT.yml
  ll-config-file: /opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/config_tracker_NvDCF_perf.yml
  # ll-config-file: /opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/config_tracker_NvDCF_accuracy.yml
  # ll-config-file: /opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/config_tracker_NvDeepSORT.yml

msgconv:
  #If you want to send images, please set the "payload-type: 1" and "msg2p-newapi: 1"
  payload-type: 1
  msg2p-newapi: 1
  frame-interval: 30
  # config: /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-test4/dstest4_msgconv_config.yml

msgbroker:
  proto-lib: /opt/nvidia/deepstream/deepstream/lib/libnvds_kafka_proto.so
  conn-str: localhost;9092
  topic: jackal-detection
  sync: 0

sink:
  sync: 0

# Inference using nvinfer:
primary-gie:
  plugin-type: 0
  config-file-path: /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-test4/enf_pgie_config.txt
  # config-file-path: dstest4_pgie_config.txt

And here’s the source code. I changed the extension to .txt so that I could upload it.
deepstream_test4_app.txt (37.8 KB)

To run it, put it in the deepstream-test4 folder, then run sudo make && ./deepstream-test4-app config.yml.

In case you’re wondering, I started developing the solution from this working example:

gst-launch-1.0 v4l2src device=/dev/video4 ! \
  'video/x-raw,format=YUY2,width=640,height=480,framerate=30/1' ! \
  nvvideoconvert ! 'video/x-raw(memory:NVMM),format=NV12' ! mux.sink_0 \
  nvstreammux name=mux width=640 height=480 batch-size=1 \
  batched-push-timeout=33333 live-source=true ! nvvideoconvert ! nv3dsink

If you run deepstream-test4 unchanged with filesrc, do you get the same problem? First make sure the problem is not related to your camera.

Does this command line have the same problem?

Hello, both the unmodified deepstream-test4 and that command line work fine, without any warnings or crashes. I have also used the same camera on multiple occasions with a deepstream-app config file (the kind you run with deepstream-app -c config.txt), and it works fine too.

In case you’re interested, it’s an Intel RealSense D435i, and this is the output of v4l2-ctl --list-formats-ext -d /dev/video4:

ioctl: VIDIOC_ENUM_FMT
	Type: Video Capture

	[0]: 'YUYV' (YUYV 4:2:2)
		Size: Discrete 320x180
			Interval: Discrete 0.017s (60.000 fps)
			Interval: Discrete 0.033s (30.000 fps)
			Interval: Discrete 0.167s (6.000 fps)
		Size: Discrete 320x240
			Interval: Discrete 0.017s (60.000 fps)
			Interval: Discrete 0.033s (30.000 fps)
			Interval: Discrete 0.167s (6.000 fps)
		Size: Discrete 424x240
			Interval: Discrete 0.017s (60.000 fps)
			Interval: Discrete 0.033s (30.000 fps)
			Interval: Discrete 0.067s (15.000 fps)
			Interval: Discrete 0.167s (6.000 fps)
		Size: Discrete 640x360
			Interval: Discrete 0.017s (60.000 fps)
			Interval: Discrete 0.033s (30.000 fps)
			Interval: Discrete 0.067s (15.000 fps)
			Interval: Discrete 0.167s (6.000 fps)
		Size: Discrete 640x480
			Interval: Discrete 0.017s (60.000 fps)
			Interval: Discrete 0.033s (30.000 fps)
			Interval: Discrete 0.067s (15.000 fps)
			Interval: Discrete 0.167s (6.000 fps)
		Size: Discrete 848x480
			Interval: Discrete 0.017s (60.000 fps)
			Interval: Discrete 0.033s (30.000 fps)
			Interval: Discrete 0.067s (15.000 fps)
			Interval: Discrete 0.167s (6.000 fps)
		Size: Discrete 960x540
			Interval: Discrete 0.017s (60.000 fps)
			Interval: Discrete 0.033s (30.000 fps)
			Interval: Discrete 0.067s (15.000 fps)
			Interval: Discrete 0.167s (6.000 fps)
		Size: Discrete 1280x720
			Interval: Discrete 0.033s (30.000 fps)
			Interval: Discrete 0.067s (15.000 fps)
			Interval: Discrete 0.167s (6.000 fps)
		Size: Discrete 1920x1080
			Interval: Discrete 0.033s (30.000 fps)
			Interval: Discrete 0.067s (15.000 fps)
			Interval: Discrete 0.167s (6.000 fps)

Update

I used another Intel RealSense D435i, and it gave the same warnings and crashed with the same error.

I switched to a Trust USB camera; the warnings disappeared, but after some time it crashed with the same error. I’m not sure which model it is, as it carries no markings whatsoever.

I went back to the RealSense cameras, and the warnings disappeared there too, but the pipeline still eventually crashes.

The error is:

0:00:00.881945568  9210 0xaaaad28c4f00 INFO                 nvinfer gstnvinfer.cpp:684:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2195> [UID = 1]: Use deserialized engine model: /home/concept/jackal-detection/src/detector.engine
0:00:00.909521280  9210 0xaaaad28c4f00 INFO                 nvinfer gstnvinfer_impl.cpp:343:notifyLoadModelStatus:<primary-nvinference-engine> [UID 1]: Load new model:/opt/nvidia/deepstream/deepstream-7.1/sources/apps/sample_apps/deepstream-test4/enf_pgie_config.txt sucessfully
Running...
mimetype is video/x-raw
NvMMLiteBlockCreate : Block : BlockType = 1 
0:01:20.003678080  9210 0xaaaad28e4580 ERROR                nvinfer gstnvinfer.cpp:1267:get_converted_buffer:<primary-nvinference-engine> cudaMemset2DAsync failed with error cudaErrorIllegalAddress while converting buffer
0:01:20.003758656  9210 0xaaaad28e4580 WARN                 nvinfer gstnvinfer.cpp:1576:gst_nvinfer_process_full_frame:<primary-nvinference-engine> error: Buffer conversion failed
ERROR from element primary-nvinference-engine: Buffer conversion failed
Error details: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(1576): gst_nvinfer_process_full_frame (): /GstPipeline:dstest4-pipeline/GstNvInfer:primary-nvinference-engine
Returned, stopping playback
[WARN ] 2025-02-18 17:37:16 (cudaErrorIllegalAddress)
[ERROR] 2025-02-18 17:37:16 Error destroying cuda device: �O�x��
[WARN ] 2025-02-18 17:37:16 (cudaErrorIllegalAddress)
[WARN ] 2025-02-18 17:37:16 (cudaErrorIllegalAddress)
[...]
[ERROR] 2025-02-18 17:37:16 Error destroying cuda device: d��x��
[WARN ] 2025-02-18 17:37:16 (cudaErrorIllegalAddress)
terminate called after throwing an instance of 'nv::cuda::RuntimeException'
  what():  cudaErrorIllegalAddress: 
Aborted (core dumped)

Very strange.
When you run deepstream-app, are you using the default model in the configuration file?

Also, how much system memory and video memory does your model take up? I suspect this may be due to insufficient resources.

Use the following commands to view the current video memory usage:

sudo pip3 install jetson-stats
jtop
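On Jetson, jtop and tegrastats report the same shared RAM pool (CPU and GPU share memory on this platform). If you want to log usage headlessly, a tegrastats-style line can be parsed like this (the line format below is an assumption based on typical Jetson output):

```python
import re

# Sketch: extract RAM usage from a tegrastats-style status line.
# The sample line is a hypothetical example, not real captured output.
line = "RAM 6321/7620MB (lfb 4x1MB) SWAP 512/3810MB CPU [12%@1510,8%@1510]"

m = re.search(r"RAM (\d+)/(\d+)MB", line)
used, total = map(int, m.groups())
print(f"RAM: {used} / {total} MB ({100 * used // total}% used)")
```

Since the Orin Nano has only 8 GB shared between CPU and GPU, another heavy process (a browser, for instance) can starve the pipeline of device memory.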

No, I’m not using the default model. I fine-tuned a pre-trained Ultralytics YOLOv11 and used a custom inference configuration.

Thanks for mentioning the memory, video memory, and jtop. After some tests, I found that closing Firefox was enough to fix the issue…

I’ll quantize to INT8 in the near future to avoid similar problems. Thanks again!
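For anyone following along, one common way to get an INT8 engine is trtexec; this is a command-line sketch (the ONNX and engine file names are assumptions, and INT8 needs a calibration cache or a QAT-exported model to keep accuracy usable):

```shell
# Sketch: build an INT8 TensorRT engine from an ONNX export.
# detector.onnx / detector_int8.engine are placeholder names.
trtexec --onnx=detector.onnx \
        --int8 \
        --saveEngine=detector_int8.engine
```

The resulting engine file can then replace the current one in the nvinfer config.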

