Hardware: Jetson Nano
CUDA: 10.2.300
cuDNN: 8.2.1.3
TensorRT: 8.0.1.6
JetPack: 4.6
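In case it helps, a quick sketch to cross-check these versions from Python (the tensorrt bindings ship with JetPack 4.6; cudaRuntimeGetVersion() is the standard CUDA runtime call):

```python
# Sketch: cross-check the installed stack versions on the Nano.
import ctypes
import tensorrt as trt

print("TensorRT:", trt.__version__)  # expected: 8.0.1.6

# CUDA runtime version via cudaRuntimeGetVersion(); 10020 decodes to 10.2.
libcudart = ctypes.CDLL("libcudart.so")  # assumes it is on the loader path
ver = ctypes.c_int()
libcudart.cudaRuntimeGetVersion(ctypes.byref(ver))
print("CUDA runtime: %d.%d" % (ver.value // 1000, (ver.value % 1000) // 10))
```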
I'm getting the following error when I run the pipeline (a standalone repro sketch follows the log):
Now playing: /opt/nvidia/deepstream/deepstream-6.0/samples/streams/sample_720p.h264
Opening in BLOCKING MODE
0:00:05.082479335 2825 0x559d782c10 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1900> [UID = 1]: deserialized trt engine from :/home/nano/silpa/troisai-wms2.0/model_b1_gpu0_fp32.engine
INFO: [Implicit Engine Info]: layers num: 2
0 INPUT kFLOAT images 3x640x640
1 OUTPUT kFLOAT output 25607
0:00:05.082641059 2825 0x559d782c10 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2004> [UID = 1]: Use deserialized engine model: /home/nano/silpa/troisai-wms2.0/model_b1_gpu0_fp32.engine
0:00:05.119537916 2825 0x559d782c10 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus: [UID 1]: Load new model:dstest1_pgie_config.txt sucessfully
Running…
NvMMLiteOpen : Block : BlockType = 261
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 261
ERROR: [TRT]: 2: [pluginV2DynamicExtRunner.cpp::execute::115] Error Code 2: Internal Error (Assertion status == kSTATUS_SUCCESS failed.)
ERROR: Failed to enqueue trt inference batch
ERROR: Infer context enqueue buffer failed, nvinfer error:NVDSINFER_TENSORRT_ERROR
0:00:05.636076943 2825 0x559d005f70 WARN nvinfer gstnvinfer.cpp:1324:gst_nvinfer_input_queue_loop: error: Failed to queue input batch for inferencing
ERROR from element primary-nvinference-engine: Failed to queue input batch for inferencing
Error details: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(1324): gst_nvinfer_input_queue_loop (): /GstPipeline:custom_model_pipeline/GstNvInfer:primary-nvinference-engine
Frame Number = 0 Number of objects = 0 Vehicle Count = 0 Person Count = 0
Returned, stopping playback
ERROR: [TRT]: 2: [pluginV2DynamicExtRunner.cpp::execute::115] Error Code 2: Internal Error (Assertion status == kSTATUS_SUCCESS failed.)
ERROR: Failed to enqueue trt inference batch
ERROR: Infer context enqueue buffer failed, nvinfer error:NVDSINFER_TENSORRT_ERROR
0:00:05.644459989 2825 0x559d005f70 WARN nvinfer gstnvinfer.cpp:1324:gst_nvinfer_input_queue_loop: error: Failed to queue input batch for inferencing
ERROR: [TRT]: 2: [pluginV2DynamicExtRunner.cpp::execute::115] Error Code 2: Internal Error (Assertion status == kSTATUS_SUCCESS failed.)
ERROR: Failed to enqueue trt inference batch
ERROR: Infer context enqueue buffer failed, nvinfer error:NVDSINFER_TENSORRT_ERROR
0:00:05.652174942 2825 0x559d005f70 WARN nvinfer gstnvinfer.cpp:1324:gst_nvinfer_input_queue_loop: error: Failed to queue input batch for inferencing
ERROR: [TRT]: 2: [pluginV2DynamicExtRunner.cpp::execute::115] Error Code 2: Internal Error (Assertion status == kSTATUS_SUCCESS failed.)
ERROR: Failed to enqueue trt inference batch
ERROR: Infer context enqueue buffer failed, nvinfer error:NVDSINFER_TENSORRT_ERROR
0:00:05.659513740 2825 0x559d005f70 WARN nvinfer gstnvinfer.cpp:1324:gst_nvinfer_input_queue_loop: error: Failed to queue input batch for inferencing
Deleting pipeline
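Since the assertion is raised inside a TensorRT plugin at enqueue time, I also want to check whether the engine fails outside DeepStream. Below is a minimal sketch for that (assuming the TensorRT Python bindings and PyCUDA are installed; the engine path is taken from the log above, and the commented-out custom-plugin line is an assumption, to be adjusted to whatever library the model's decode layer was built with):

```python
# Sketch: deserialize the engine and run it once outside DeepStream,
# to see whether the plugin assertion fires here as well.
import ctypes
import numpy as np
import pycuda.autoinit  # noqa: F401 -- creates a CUDA context
import pycuda.driver as cuda
import tensorrt as trt

ENGINE_PATH = "/home/nano/silpa/troisai-wms2.0/model_b1_gpu0_fp32.engine"

# If the model needs a custom plugin, load its library first
# (hypothetical name -- adjust to your build):
# ctypes.CDLL("./libnvdsinfer_custom_impl_Yolo.so")

logger = trt.Logger(trt.Logger.INFO)
trt.init_libnvinfer_plugins(logger, "")  # register built-in TRT plugins

with open(ENGINE_PATH, "rb") as f, trt.Runtime(logger) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())
context = engine.create_execution_context()

# One device buffer per binding (implicit-batch engine, batch size 1).
bindings, bufs = [], []
for i in range(engine.num_bindings):
    dtype = trt.nptype(engine.get_binding_dtype(i))
    host = np.zeros(trt.volume(engine.get_binding_shape(i)), dtype=dtype)
    dev = cuda.mem_alloc(host.nbytes)
    bindings.append(int(dev))
    bufs.append((host, dev))

cuda.memcpy_htod(bufs[0][1], bufs[0][0])  # zero-filled 3x640x640 input
print("execute returned:", context.execute(batch_size=1, bindings=bindings))
```

If the same Error Code 2 assertion fires here too, the problem is in the engine/plugin itself (for example, a plugin built against a different TensorRT version) rather than in the nvinfer element.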