Custom YOLOv4 inference fails on Jetson Nano


I have successfully trained my custom YOLOv4 model with TLT, following yolo_v4.ipynb from tlt_cv_samples_v1, and the mAP result was good.
The inference/visualization test using TLT also looked good. I used a ResNet-10 backbone for this training, with 5 classes.

The problem arises when I deploy it on a Jetson Nano 2GB using DeepStream.
I am already using nvdsinfer_custom_impl_Yolo from GitHub - NVIDIA-AI-IOT/yolov4_deepstream
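For reference, a custom bbox parser like that one is wired into the nvinfer configuration through the `parse-bbox-func-name` and `custom-lib-path` keys. A minimal sketch (the library path and filename depend on where you built the plugin):

```ini
[property]
# Custom output-parsing function exported by the plugin library
parse-bbox-func-name=NvDsInferParseCustomYoloV4
# Path to the compiled custom parser (location is build-dependent)
custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
```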

The error occurs in NvDsInferParseCustomYoloV4() at this assertion:
assert(boxes.inferDims.numDims == 3)

Error log:

deepstream-opencv-test: nvdsparsebbox_Yolo.cpp:139: bool NvDsInferParseCustomYoloV4(const std::vector&, const NvDsInferNetworkInfo&, const NvDsInferParseDetectionParams&, std::vector&): Assertion `boxes.inferDims.numDims == 3' failed.

Below is my setup:

• Hardware Platform (Jetson / GPU) = Jetson Nano 2GB
• DeepStream Version = deepstream 5.1
• JetPack Version = 4.5.1
• TensorRT Version = 7.1.3
• CUDA version = 10.2

Thank you


Could you check the GitHub repository below for the TLT-based YOLOv4 model instead:




I tried your link and successfully ran my YOLOv4 TLT model.

Thanks!

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.