CUDA Error 222 While Converting File into TensorRT engine

I just cloned the tensorrtx/yolov5 GitHub repository and I am trying to run the model. I followed all the instructions, but when I run the following command, which I believe converts the YOLO weights into an engine file: "sudo ./yolov5_det -s yolov5s.wts yolov5s.engine s", I get the following error:

[01/08/2024-17:51:37] [W] [TRT] The implicit batch dimension mode has been deprecated. Please create the network with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag whenever possible.
Loading weights: yolov5s.wts
CUDA error 222 at /home/nvidia/Documents/tensorrtx/yolov5/plugin/yololayer.cu:32yolov5_det: /home/nvidia/Documents/tensorrtx/yolov5/plugin/yololayer.cu:32: nvinfer1::YoloLayerPlugin::YoloLayerPlugin(int, int, int, int, bool, const std::vector&): Assertion `0' failed.
Aborted

NOTE: I have installed everything needed, and when I run some of the projects using the DeepStream 6.2 SDK everything runs normally; I don't get any CUDA error messages. I am using a Jetson Orin NX with CUDA 11.8 and 11.4 installed, and TensorRT 8.5.2.2.

Is there anything I should do? Any help would be appreciated.

If you do not mind ONNX, can you try converting the weights to an ONNX file first?
Then you can use the ONNX file to create the TensorRT engine file.

Maybe something like this can help:
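As a rough sketch only (assuming you export with the ultralytics/yolov5 repo's export.py and build the engine with trtexec, which ships with TensorRT; exact paths, image size, and flags depend on your setup):

# 1) Export the PyTorch weights to ONNX (run from inside the ultralytics/yolov5 repo)
python export.py --weights yolov5s.pt --include onnx --imgsz 640

# 2) Build a TensorRT engine from the ONNX file (trtexec is under /usr/src/tensorrt/bin on JetPack)
/usr/src/tensorrt/bin/trtexec --onnx=yolov5s.onnx --saveEngine=yolov5s.engine --fp16

One thing to keep in mind: an engine built this way may not be directly usable with the tensorrtx yolov5_det demo, since that demo expects the engine it serializes itself from the .wts file (including its custom YoloLayer plugin). With the ONNX route you would run inference through your own TensorRT code or DeepStream instead.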