• Hardware Platform (Jetson / GPU)
Jetson Orin Nano 8GB
• DeepStream Version
7.1
• JetPack Version (valid for Jetson only)
6.1
• TensorRT Version
10.3
Hello,
I’ve built a pipeline starting from the available deepstream-test4 example and changed the source so that I could use my USB camera, an Intel RealSense D435i. The pipeline now looks like this:
v4l2src -> caps_v4l2src -> nvvidconvsrc -> caps_nvvidconv -> nvstreammux -> ...
(the rest is unchanged)
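For clarity, here is roughly how that source chain is created and linked in my app (a minimal sketch with my own variable and element names; the caps mirror the gst-launch line at the end of this post, and everything downstream of nvstreammux is the stock deepstream-test4 code):

#include <gst/gst.h>

/* Sketch of the modified source chain: v4l2src -> YUY2 caps ->
 * nvvideoconvert -> NV12/NVMM caps -> nvstreammux sink_0. */
static void
link_usb_source (GstElement *pipeline, GstElement *streammux)
{
  GstElement *source, *caps_v4l2src, *nvvidconvsrc, *caps_nvvidconv;
  GstCaps *caps_yuy2, *caps_nvmm;
  GstPad *srcpad, *sinkpad;

  source         = gst_element_factory_make ("v4l2src", "usb-cam-source");
  caps_v4l2src   = gst_element_factory_make ("capsfilter", "v4l2src_caps");
  nvvidconvsrc   = gst_element_factory_make ("nvvideoconvert", "convertor_src");
  caps_nvvidconv = gst_element_factory_make ("capsfilter", "nvmm_caps");

  g_object_set (G_OBJECT (source), "device", "/dev/video4", NULL);

  /* Raw YUY2 frames from the RealSense's UVC color node ... */
  caps_yuy2 = gst_caps_from_string (
      "video/x-raw, format=YUY2, width=640, height=480, framerate=30/1");
  g_object_set (G_OBJECT (caps_v4l2src), "caps", caps_yuy2, NULL);
  gst_caps_unref (caps_yuy2);

  /* ... converted to NV12 in NVMM device memory for nvstreammux. */
  caps_nvmm = gst_caps_from_string ("video/x-raw(memory:NVMM), format=NV12");
  g_object_set (G_OBJECT (caps_nvvidconv), "caps", caps_nvmm, NULL);
  gst_caps_unref (caps_nvmm);

  gst_bin_add_many (GST_BIN (pipeline), source, caps_v4l2src,
      nvvidconvsrc, caps_nvvidconv, NULL);
  gst_element_link_many (source, caps_v4l2src, nvvidconvsrc,
      caps_nvvidconv, NULL);

  /* Request sink_0 on the muxer and link the converted stream to it. */
  sinkpad = gst_element_request_pad_simple (streammux, "sink_0");
  srcpad  = gst_element_get_static_pad (caps_nvvidconv, "src");
  gst_pad_link (srcpad, sinkpad);
  gst_object_unref (srcpad);
  gst_object_unref (sinkpad);
}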
Surprisingly, it works, but it prints these warnings:
0:00:00.660332064 3145434 0xaaab0d480c60 INFO nvinfer gstnvinfer.cpp:684:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2195> [UID = 1]: Use deserialized engine model: /home/concept/jackal-detection/src/detector.engine
0:00:00.668705408 3145434 0xaaab0d480c60 INFO nvinfer gstnvinfer_impl.cpp:343:notifyLoadModelStatus:<primary-nvinference-engine> [UID 1]: Load new model:/opt/nvidia/deepstream/deepstream-7.1/sources/apps/sample_apps/deepstream-test4/enf_pgie_config.txt sucessfully
Running...
mimetype is video/x-raw
0:00:00.693660896 3145434 0xaaab0d46ea40 WARN v4l2bufferpool gstv4l2bufferpool.c:842:gst_v4l2_buffer_pool_start:<usb-cam-source:pool0:src> Uncertain or not enough buffers, enabling copy threshold
0:00:00.700636672 3145434 0xaaab0d46ea40 WARN v4l2bufferpool gstv4l2bufferpool.c:1373:gst_v4l2_buffer_pool_dqbuf:<usb-cam-source:pool0:src> Driver should never set v4l2_buffer.field to ANY
0:00:00.700692736 3145434 0xaaab0d46ea40 WARN v4l2bufferpool gstv4l2bufferpool.c:2222:gst_v4l2_buffer_pool_process:<usb-cam-source:pool0:src> Dropping truncated buffer, this is likely a driver bug.
0:00:00.700740896 3145434 0xaaab0d46ea40 WARN v4l2bufferpool gstv4l2bufferpool.c:2222:gst_v4l2_buffer_pool_process:<usb-cam-source:pool0:src> Dropping truncated buffer, this is likely a driver bug.
0:00:00.700765248 3145434 0xaaab0d46ea40 WARN v4l2bufferpool gstv4l2bufferpool.c:2222:gst_v4l2_buffer_pool_process:<usb-cam-source:pool0:src> Dropping truncated buffer, this is likely a driver bug.
0:00:00.700786432 3145434 0xaaab0d46ea40 WARN v4l2bufferpool gstv4l2bufferpool.c:2222:gst_v4l2_buffer_pool_process:<usb-cam-source:pool0:src> Dropping truncated buffer, this is likely a driver bug.
0:00:01.846046848 3145434 0xaaab0d46ea40 WARN v4l2src gstv4l2src.c:1123:gst_v4l2src_create:<usb-cam-source> lost frames detected: count = 10 - ts: 0:00:01.080397960
0:00:01.903180928 3145434 0xaaab0d46ea40 WARN v4l2src gstv4l2src.c:1123:gst_v4l2src_create:<usb-cam-source> lost frames detected: count = 1 - ts: 0:00:01.180462472
After a few minutes, it crashes:
/dvs/git/dirty/git-master_linux/nvutils/nvbufsurftransform/nvbufsurftransform_copy.cpp:341: => Failed in mem copy
ERROR: [TRT]: IExecutionContext::enqueueV3: Error Code 1: Cuda Driver (an illegal memory access was encountered)
ERROR: Failed to enqueue trt inference batch
ERROR: Infer context enqueue buffer failed, nvinfer error:NVDSINFER_TENSORRT_ERROR
0:06:20.180545376 3151068 0xaaaadf69a980 WARN nvinfer gstnvinfer.cpp:1420:gst_nvinfer_input_queue_loop:<primary-nvinference-engine> error: Failed to queue input batch for inferencing
ERROR from element primary-nvinference-engine: Failed to queue input batch for inferencing
Error details: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(1420): gst_nvinfer_input_queue_loop (): /GstPipeline:dstest4-pipeline/GstNvInfer:primary-nvinference-engine
Returned, stopping playback
0:06:20.189456960 3151068 0xaaaadf69ad80 ERROR nvinfer gstnvinfer.cpp:1267:get_converted_buffer:<primary-nvinference-engine> cudaMemset2DAsync failed with error cudaErrorIllegalAddress while converting buffer
0:06:20.189505792 3151068 0xaaaadf69ad80 WARN nvinfer gstnvinfer.cpp:1576:gst_nvinfer_process_full_frame:<primary-nvinference-engine> error: Buffer conversion failed
CUDA Runtime error cudaFreeHost(host_) # an illegal memory access was encountered, code = cudaErrorIllegalAddress [ 700 ] in file /dvs/git/dirty/git-master_linux/deepstream/sdk/src/utils/nvll_osd/memory.hpp:78
CUDA Runtime error cudaFree(device_) # an illegal memory access was encountered, code = cudaErrorIllegalAddress [ 700 ] in file /dvs/git/dirty/git-master_linux/deepstream/sdk/src/utils/nvll_osd/memory.hpp:79
CUDA Runtime error cudaFreeHost(host_) # an illegal memory access was encountered, code = cudaErrorIllegalAddress [ 700 ] in file /dvs/git/dirty/git-master_linux/deepstream/sdk/src/utils/nvll_osd/memory.hpp:78
CUDA Runtime error cudaFree(device_) # an illegal memory access was encountered, code = cudaErrorIllegalAddress [ 700 ] in file /dvs/git/dirty/git-master_linux/deepstream/sdk/src/utils/nvll_osd/memory.hpp:79
CUDA Runtime error cudaFreeHost(host_) # an illegal memory access was encountered, code = cudaErrorIllegalAddress [ 700 ] in file /dvs/git/dirty/git-master_linux/deepstream/sdk/src/utils/nvll_osd/memory.hpp:78
CUDA Runtime error cudaFree(device_) # an illegal memory access was encountered, code = cudaErrorIllegalAddress [ 700 ] in file /dvs/git/dirty/git-master_linux/deepstream/sdk/src/utils/nvll_osd/memory.hpp:79
CUDA Runtime error cudaFreeHost(host_) # an illegal memory access was encountered, code = cudaErrorIllegalAddress [ 700 ] in file /dvs/git/dirty/git-master_linux/deepstream/sdk/src/utils/nvll_osd/memory.hpp:78
CUDA Runtime error cudaFree(device_) # an illegal memory access was encountered, code = cudaErrorIllegalAddress [ 700 ] in file /dvs/git/dirty/git-master_linux/deepstream/sdk/src/utils/nvll_osd/memory.hpp:79
CUDA Runtime error cudaFreeHost(host_) # an illegal memory access was encountered, code = cudaErrorIllegalAddress [ 700 ] in file /dvs/git/dirty/git-master_linux/deepstream/sdk/src/utils/nvll_osd/memory.hpp:78
CUDA Runtime error cudaFree(device_) # an illegal memory access was encountered, code = cudaErrorIllegalAddress [ 700 ] in file /dvs/git/dirty/git-master_linux/deepstream/sdk/src/utils/nvll_osd/memory.hpp:79
CUDA Runtime error cudaFreeHost(host_) # an illegal memory access was encountered, code = cudaErrorIllegalAddress [ 700 ] in file /dvs/git/dirty/git-master_linux/deepstream/sdk/src/utils/nvll_osd/memory.hpp:78
CUDA Runtime error cudaFree(device_) # an illegal memory access was encountered, code = cudaErrorIllegalAddress [ 700 ] in file /dvs/git/dirty/git-master_linux/deepstream/sdk/src/utils/nvll_osd/memory.hpp:79
GPUassert: an illegal memory access was encountered /dvs/git/dirty/git-master_linux/deepstream/sdk/src/utils/nvmultiobjecttracker/src/modules/cuDCFv2/cuDCFFrameTransformTexture.cu 693
!![Exception] GPUassert failed
An exception occurred. GPUassert failed
gstnvtracker: Low-level tracker lib returned error 1
[WARN ] 2025-02-17 14:33:57 (cudaErrorIllegalAddress)
[ERROR] 2025-02-17 14:33:57 Error destroying cuda device: !"�J��
[WARN ] 2025-02-17 14:33:57 (cudaErrorIllegalAddress)
[WARN ] 2025-02-17 14:33:57 (cudaErrorIllegalAddress)
[WARN ] 2025-02-17 14:33:57 (cudaErrorIllegalAddress)
[...]
[WARN ] 2025-02-17 14:33:57 (cudaErrorIllegalAddress)
[WARN ] 2025-02-17 14:33:57 (cudaErrorIllegalAddress)
[ERROR] 2025-02-17 14:33:57 Error destroying cuda device: y��J��
[WARN ] 2025-02-17 14:33:57 (cudaErrorIllegalAddress)
[...]
[WARN ] 2025-02-17 14:33:57 (cudaErrorIllegalAddress)
terminate called after throwing an instance of 'nv::cuda::RuntimeException'
what(): cudaErrorIllegalAddress:
Aborted (core dumped)
Any ideas on how to proceed to solve this?
Here’s the config file:
source:
  device: /dev/video4
streammux:
  batch-size: 1
  batched-push-timeout: 40000
  width: 640
  height: 480
  live-source: 1
  gpu-id: 0
  enable-padding: 0
  nvbuf-memory-type: 0
nvtracker:
  tracker-width: 640
  tracker-height: 480
  gpu-id: 0
  ll-lib-file: /opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
  # ll-config-file: /opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/config_tracker_IOU.yml
  # ll-config-file: /opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/config_tracker_NvSORT.yml
  ll-config-file: /opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/config_tracker_NvDCF_perf.yml
  # ll-config-file: /opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/config_tracker_NvDCF_accuracy.yml
  # ll-config-file: /opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/config_tracker_NvDeepSORT.yml
msgconv:
  # If you want to send images, please set the "payload-type: 1" and "msg2p-newapi: 1"
  payload-type: 1
  msg2p-newapi: 1
  frame-interval: 30
  # config: /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-test4/dstest4_msgconv_config.yml
msgbroker:
  proto-lib: /opt/nvidia/deepstream/deepstream/lib/libnvds_kafka_proto.so
  conn-str: localhost;9092
  topic: jackal-detection
  sync: 0
sink:
  sync: 0
# Inference using nvinfer:
primary-gie:
  plugin-type: 0
  config-file-path: /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-test4/enf_pgie_config.txt
  # config-file-path: dstest4_pgie_config.txt
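For reference, the streammux group above ends up on the nvstreammux element as ordinary GObject properties, essentially like this (a sketch with the values hard-coded instead of parsed from the YAML; the property names are the standard nvstreammux ones):

#include <gst/gst.h>

/* Sketch: applying the streammux: group from the config above. */
static void
configure_streammux (GstElement *streammux)
{
  g_object_set (G_OBJECT (streammux),
      "batch-size", 1,
      "batched-push-timeout", 40000,
      "width", 640,
      "height", 480,
      "live-source", TRUE,
      "gpu-id", 0,
      "enable-padding", FALSE,
      "nvbuf-memory-type", 0,
      NULL);
}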
And here’s the source code. I changed the extension to .txt so that I could upload it.
deepstream_test4_app.txt (37.8 KB)
To run it, put it into the deepstream-test4 folder, then run sudo make && ./deepstream-test4-app config.yml.
In case you’re wondering, I started developing this solution from the following working example:
gst-launch-1.0 v4l2src device=/dev/video4 ! 'video/x-raw,format=YUY2,width=640,height=480,framerate=30/1' ! nvvideoconvert ! 'video/x-raw(memory:NVMM),format=NV12' ! mux.sink_0 nvstreammux name=mux width=640 height=480 batch-size=1 batched-push-timeout=33333 live-source=true ! nvvideoconvert ! nv3dsink