Illegal memory access was encountered while running DeepStream

Hello all,

Here is my setup

• Hardware Platform: GPU
• Docker image: nvcr.io/nvidia/deepstream:6.0-devel
• DeepStream 6.0
• TensorRT 8.0.1-1
• NVIDIA GPU Driver Version 11.3

I have been running some tests using the Docker container for DeepStream 6.0, mainly with the example models that come with DeepStream. However, when I used an input image of 1713x1221 I got the following output:

gst-launch-1.0 filesrc location= raw_test_images/test_jpeg/ped_sj.jpeg ! jpegdec ! videoconvert ! nvvideoconvert nvbuf-memory-type=3 ! 'video/x-raw(memory:NVMM), width=1713, height=1221' ! m.sink_0 nvstreammux name=m batch-size=2 width=1713 height=1221 ! nvinfer config-file-path= /opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/config_infer_primary_nano.txt ! fakesink

(gst-plugin-scanner:13): GStreamer-WARNING **: 08:23:19.153: Failed to load plugin '/usr/lib/x86_64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_inferserver.so': libtritonserver.so: cannot open shared object file: No such file or directory

(gst-plugin-scanner:13): GStreamer-WARNING **: 08:23:19.182: Failed to load plugin '/usr/lib/x86_64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_udp.so': librivermax.so.0: cannot open shared object file: No such file or directory
Setting pipeline to PAUSED …
ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:1484 Deserialize engine failed because file path: /opt/nvidia/deepstream/deepstream-6.0/samples/configs/deepstream-app/…/…/models/Primary_Detector_Nano/resnet10.caffemodel_b8_gpu0_fp16.engine open error
0:00:00.761627849 12 0x556227c8dd90 WARN nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1889> [UID = 1]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-6.0/samples/configs/deepstream-app/…/…/models/Primary_Detector_Nano/resnet10.caffemodel_b8_gpu0_fp16.engine failed
0:00:00.761665229 12 0x556227c8dd90 WARN nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1996> [UID = 1]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-6.0/samples/configs/deepstream-app/…/…/models/Primary_Detector_Nano/resnet10.caffemodel_b8_gpu0_fp16.engine failed, try rebuild
0:00:00.761673913 12 0x556227c8dd90 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1914> [UID = 1]: Trying to create engine from model files
WARNING: …/nvdsinfer/nvdsinfer_model_builder.cpp:1224 FP16 not supported by platform. Using FP32 mode.
WARNING: [TRT]: Detected invalid timing cache, setup a local cache instead
0:00:06.916392885 12 0x556227c8dd90 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1947> [UID = 1]: serialize cuda engine to file: /opt/nvidia/deepstream/deepstream-6.0/samples/models/Primary_Detector_Nano/resnet10.caffemodel_b8_gpu0_fp32.engine successfully
INFO: …/nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT input_1 3x272x480
1 OUTPUT kFLOAT conv2d_bbox 16x17x30
2 OUTPUT kFLOAT conv2d_cov/Sigmoid 4x17x30

0:00:06.920323919 12 0x556227c8dd90 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus: [UID 1]: Load new model:/opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/config_infer_primary_nano.txt sucessfully
Pipeline is PREROLLING …
WARNING: from element /GstPipeline:pipeline0/GstNvStreamMux:m: Rounding muxer output width to the next multiple of 8: 1720
Additional debug info:
gstnvstreammux.c(2795): gst_nvstreammux_change_state (): /GstPipeline:pipeline0/GstNvStreamMux:m
WARNING: from element /GstPipeline:pipeline0/GstNvStreamMux:m: Rounding muxer output height to the next multiple of 4: 1224
Additional debug info:
gstnvstreammux.c(2803): gst_nvstreammux_change_state (): /GstPipeline:pipeline0/GstNvStreamMux:m
Cuda failure: status=700
Error(-1) in buffer allocation

** (gst-launch-1.0:12): CRITICAL **: 08:23:25.720: gst_nvds_buffer_pool_alloc_buffer: assertion 'mem' failed
ERROR: from element /GstPipeline:pipeline0/GstNvStreamMux:m: Failed to allocate the buffers inside the Nvstreammux output pool
Additional debug info:
gstnvstreammux.c(791): gst_nvstreammux_alloc_output_buffers (): /GstPipeline:pipeline0/GstNvStreamMux:m
ERROR: pipeline doesn't want to preroll.
Setting pipeline to NULL …
ERROR: [TRT]: 1: [hardwareContext.cpp::terminateCommonContext::141] Error Code 1: Cuda Runtime (an illegal memory access was encountered)
ERROR: [TRT]: [defaultAllocator.cpp::free::85] Error Code 1: Cuda Runtime (an illegal memory access was encountered)
WARNING: [TRT]: Unable to determine GPU memory usage
WARNING: [TRT]: Unable to determine GPU memory usage
ERROR: [TRT]: [defaultAllocator.cpp::free::85] Error Code 1: Cuda Runtime (an illegal memory access was encountered)
ERROR: [TRT]: [resources.cpp::~ScopedCudaStream::455] Error Code 1: Cuda Runtime (an illegal memory access was encountered)
ERROR: [TRT]: [resources.cpp::~ScopedCudaEvent::438] Error Code 1: Cuda Runtime (an illegal memory access was encountered)
ERROR: [TRT]: [resources.cpp::~ScopedCudaEvent::438] Error Code 1: Cuda Runtime (an illegal memory access was encountered)
[the ScopedCudaEvent error above is repeated 15 more times]
terminate called after throwing an instance of 'nvinfer1::CudaRuntimeError'
what(): an illegal memory access was encountered
Aborted (core dumped)

Does this mean that input images/videos to DeepStream must have dimensions that are multiples of 8? (When I test with a 664x448 image I don't see this issue.)

I didn't find any reference to this in the documentation.
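
In case the rounding really is the trigger, one workaround I could try (untested sketch; it is just the pipeline above with the 1720x1224 dimensions that the muxer warnings suggest, so that nvstreammux does not have to round) is to let nvvideoconvert scale to the aligned size before the muxer:

gst-launch-1.0 filesrc location= raw_test_images/test_jpeg/ped_sj.jpeg ! jpegdec ! videoconvert ! nvvideoconvert nvbuf-memory-type=3 ! 'video/x-raw(memory:NVMM), width=1720, height=1224' ! m.sink_0 nvstreammux name=m batch-size=2 width=1720 height=1224 ! nvinfer config-file-path= /opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/config_infer_primary_nano.txt ! fakesink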

I did an additional test with a pipeline that only goes as far as nvvideoconvert:

gst-launch-1.0 filesrc location= raw_test_images/test_jpeg/ped_sj.jpeg ! jpegdec ! videoconvert ! nvvideoconvert nvbuf-memory-type=3 ! 'video/x-raw(memory:NVMM), width=1713, height=1221' ! fakesink
Setting pipeline to PAUSED …
Pipeline is PREROLLING …
Pipeline is PREROLLED …
Setting pipeline to PLAYING …
New clock: GstSystemClock
Got EOS from element "pipeline0".
Execution ended after 0:00:00.000097610
Setting pipeline to PAUSED …
Setting pipeline to READY …
Cuda failure: status=700
nvbufsurface: Error(-1) in releasing cuda memory
Cuda failure: status=700
nvbufsurface: Error(-1) in releasing cuda memory
Cuda failure: status=700
nvbufsurface: Error(-1) in releasing cuda memory
Cuda failure: status=700
nvbufsurface: Error(-1) in releasing cuda memory
Cuda failure: status=700
nvbufsurface: Error(-1) in releasing cuda memory
Setting pipeline to NULL …
Freeing pipeline …

I am not sure if this is related to the problem with nvstreammux.
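
If it would help narrow things down, I can also rerun this nvvideoconvert-only pipeline under cuda-memcheck to get more detail on where the illegal access happens (just a sketch; I am assuming the cuda-memcheck tool from the CUDA toolkit is available inside the devel container):

cuda-memcheck gst-launch-1.0 filesrc location= raw_test_images/test_jpeg/ped_sj.jpeg ! jpegdec ! videoconvert ! nvvideoconvert nvbuf-memory-type=3 ! 'video/x-raw(memory:NVMM), width=1713, height=1221' ! fakesink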

Don't use DS 6.0 or 6.0.1; there are a lot of bugs, and they don't want to help you when you are really stuck on issues like this…

Hi @cleram, sorry for the late reply.
Could you share the sample image you tested (ped_sj.jpeg) with us via Google Drive?
Also, we have just released DeepStream 6.1. Could you use the nvcr.io/nvidia/deepstream:6.1-devel Docker image to run the same pipeline and see whether the same issue occurs?
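
For example, something along these lines should work to start the 6.1 container with your test images mounted (the host path is just a placeholder for wherever the images live on your machine):

docker run --gpus all -it --rm -v /path/to/raw_test_images:/raw_test_images nvcr.io/nvidia/deepstream:6.1-devel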
Thanks

Hi @yuweiw, next week I'll try with the new DeepStream release. In the meantime, here is the image I was using for this pipeline:

OK, I launched the pipeline with your image on DeepStream 6.1 in my environment. It works well and shows no memory issue. So I look forward to your reply next week.

There has been no update from you for a while, so we assume this is no longer an issue.
Hence we are closing this topic. If you need further support, please open a new one.
Thanks

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.