CUDA Errors on custom model

Running the DeepStream YOLOv3 sample, I am able to run the demonstration, but when I replace the standard yolov3.cfg and yolov3.weights with a custom model it fails with the errors below. I tried googling these errors but didn't find much useful information.

Is anyone familiar with this error who can provide some guidance on where to begin?

Thank you,

DougM

Creating LL OSD context new
0:08:56.327632826  8259     0x2dfc9a30 ERROR                nvinfer gstnvinfer.cpp:511:gst_nvinfer_logger:<primary_gie_classifier> NvDsInferContext[UID 1]:dequeueOutputBatch(): Failed to synchronize on cuda event (cudaErrorUnknown)
0:08:56.327704766  8259     0x2dfc9a30 WARN                 nvinfer gstnvinfer.cpp:1861:gst_nvinfer_output_loop:<primary_gie_classifier> error: Failed to dequeue output from inferencing. NvDsInferContext error: NVDSINFER_CUDA_ERROR
ERROR from primary_gie_classifier: Failed to dequeue output from inferencing. NvDsInferContext error: NVDSINFER_CUDA_ERROR
Debug info: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(1861): gst_nvinfer_output_loop (): /GstPipeline:pipeline/GstBin:primary_gie_bin/GstNvInfer:primary_gie_classifier
0:08:56.327638139  8259     0x2e011e80 ERROR                nvinfer gstnvinfer.cpp:511:gst_nvinfer_logger:<primary_gie_classifier> NvDsInferContext[UID 1]:queueInputBatch(): Failed to add cudaStream callback for returning input buffers (cudaErrorUnknown)
0:08:56.329546327  8259     0x2e011e80 WARN                 nvinfer gstnvinfer.cpp:1098:gst_nvinfer_input_queue_loop:<primary_gie_classifier> error: Failed to queue input batch for inferencing
0:08:56.329758785  8259     0x2e011e80 ERROR                nvinfer gstnvinfer.cpp:511:gst_nvinfer_logger:<primary_gie_classifier> NvDsInferContext[UID 1]:queueInputBatch(): Failed to make stream wait on event(cudaErrorUnknown)
0:08:56.329906984  8259     0x2e011e80 WARN                 nvinfer gstnvinfer.cpp:1098:gst_nvinfer_input_queue_loop:<primary_gie_classifier> error: Failed to queue input batch for inferencing
ERROR from primary_gie_classifier: Failed to queue input batch for inferencing
Debug info: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(1098): gst_nvinfer_input_queue_loop (): /GstPipeline:pipeline/GstBin:primary_gie_bin/GstNvInfer:primary_gie_classifier
ERROR from primary_gie_classifier: Failed to queue input batch for inferencing
Debug info: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(1098): gst_nvinfer_input_queue_loop (): /GstPipeline:pipeline/GstBin:primary_gie_bin/GstNvInfer:primary_gie_classifier
Quitting

Hi,
Please refer to the following document:
https://docs.nvidia.com/metropolis/deepstream/Custom_YOLO_Model_in_the_DeepStream_YOLO_App.pdf
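For reference, the changes that document describes mostly come down to pointing the nvinfer config at your custom Darknet files and keeping the class count consistent everywhere. A rough sketch of the relevant entries, assuming the objectDetector_Yolo sample layout (the file names and class count below are placeholders for your own model):

[property]
# Use the custom Darknet files instead of the stock yolov3.cfg / yolov3.weights
custom-network-config=yolov3-custom.cfg
model-file=yolov3-custom.weights
# Must match the number of classes the custom model was trained on
num-detected-classes=4

If the class count differs from the default 80, the custom bounding-box parser (NUM_CLASSES_YOLO in nvdsparsebbox_Yolo.cpp) also needs to be updated and libnvdsinfer_custom_impl_Yolo rebuilt; deleting any previously generated .engine file so TensorRT regenerates it for the new network is also a good idea.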

That’s the ticket!

Thanks!!

DougM