DeepStream: Getting "ISliceLayer has out of bounds access" error with ONNX model

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) RTX 2060
• DeepStream Version 5.0.0
• JetPack Version (valid for Jetson only)
• TensorRT Version 7.1.1
• NVIDIA GPU Driver Version (valid for GPU only) 460
• Issue Type( questions, new requirements, bugs) bugs
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
Hello, when I try to run an ONNX model exported from PyTorch with batch = 5 in DeepStream with 1 input video source, I get the following errors:

ERROR: …/nvdsinfer/nvdsinfer_func_utils.cpp:33 [TRT]: Slice_4: ISliceLayer has out of bounds access on axis 0
ERROR: …/nvdsinfer/nvdsinfer_func_utils.cpp:33 [TRT]: shapeMachine.cpp (202) - Shape Error in checkSlice: out of bounds access for slice
ERROR: …/nvdsinfer/nvdsinfer_func_utils.cpp:33 [TRT]: Instruction: CHECK_SLICE 1 0 5 1
ERROR: nvdsinfer_backend.cpp:456 Failed to enqueue trt inference batch

If I provide 5 input source videos, the application runs perfectly.
I am using deepstream-test3 as the reference application.

If you want to run DS with dynamic batch, the ONNX file needs to be exported with dynamic batch, i.e. the value of the batch dim is -1.
Then, in the DS gie config, the "batch-size" setting is the max batch it can support.
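
For reference, a minimal sketch of what that nvinfer (gie) config group could look like, assuming a deepstream-test3 style pgie config; the file name, model path and the other keys shown here are assumptions, not taken from this thread:

[property]
gpu-id=0
# ONNX exported with a dynamic batch dimension; DeepStream builds the TensorRT engine from it
onnx-file=yolov5s_dynamic.onnx
# Max batch size the generated engine can support; the runtime batch follows the number of sources
batch-size=5
# 0=FP32, 1=INT8, 2=FP16
network-mode=0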

I tried with batch as -1, but it says it does not accept negative values.

export PYTHONPATH="$PWD" && python3 ./models/export_onnx.py --weights ./models/yolov5s.pt --img 640 --batch -1
Namespace(batch_size=-1, img_size=[640, 640], weights='./models/yolov5s.pt')
Traceback (most recent call last):
File "./models/export_onnx.py", line 28, in <module>
img = torch.zeros((opt.batch_size, 3, *opt.img_size)) # image size(1,3,320,192) iDetection
RuntimeError: Trying to create tensor with negative dimension -1: [-1, 3, 640, 640]

  1. About the error "Trying to create tensor with negative dimension -1", you could search for a solution on Google.
  2. Or you can refer to TensorRT/ONNX - eLinux.org to modify your model to use dynamic batch.
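
For what it's worth, a rough sketch of the usual fix on the PyTorch side: keep a concrete batch size (e.g. 1) for the dummy input and mark the batch dimension as dynamic via dynamic_axes in torch.onnx.export. The checkpoint-loading line, tensor names and opset below are assumptions about a yolov5-style export script, not the actual contents of export_onnx.py:

import torch

# Assumption: yolov5-style checkpoint where the network is stored under the 'model' key
model = torch.load('./models/yolov5s.pt', map_location='cpu')['model'].float().eval()

# Dummy input with a concrete, positive batch size; the dynamic part is declared below
img = torch.zeros((1, 3, 640, 640))

torch.onnx.export(
    model, img, 'yolov5s_dynamic.onnx',
    opset_version=11,
    input_names=['images'],
    output_names=['output'],
    # Mark dim 0 (batch) as dynamic so TensorRT can serve any batch up to "batch-size"
    dynamic_axes={'images': {0: 'batch'}, 'output': {0: 'batch'}},
)

With the batch axis exported as dynamic, DeepStream can build the engine once and run it with either 1 or 5 sources.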
