Hi,
I am running a TensorRT model on DeepStream. When I execute the deepstream-app script, it fails with "Buffer conversion failed". I have tried input video files in different formats, such as MP4 and MJPEG. With an MP4 input file, it shows "Buffer conversion failed".
For the MP4 format:
siddhu2041@linux:/opt/nvidia/deepstream/deepstream-5.1/sources/objectDetector_Yolo$ deepstream-app -c deepstream_app_config_yoloV4.txt
Unknown or legacy key specified ‘is-classifier’ for group [property]
Using winsys: x11
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream-5.1/lib/libnvds_mot_klt.so
gstnvtracker: Optional NvMOT_RemoveStreams not implemented
gstnvtracker: Batch processing is OFF
gstnvtracker: Past frame output is OFF
0:00:04.968266885 7762 0xe4f5a10 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1702> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-5.1/sources/objectDetector_Yolo/human_detection/custom-yolov4-tiny_human1-608.engine
INFO: [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT 000_net 3x608x608
1 OUTPUT kFLOAT 030_convolutional 18x19x19
2 OUTPUT kFLOAT 037_convolutional 18x38x38
0:00:04.968498969 7762 0xe4f5a10 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1806> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-5.1/sources/objectDetector_Yolo/human_detection/custom-yolov4-tiny_human1-608.engine
0:00:04.985290989 7762 0xe4f5a10 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<primary_gie> [UID 1]: Load new model:/opt/nvidia/deepstream/deepstream-5.1/sources/objectDetector_Yolo/config_infer_primary_yoloV4.txt sucessfully
Runtime commands:
h: Print this help
q: Quit
p: Pause
r: Resume
** INFO: <bus_callback:181>: Pipeline ready
Opening in BLOCKING MODE
NvMMLiteOpen : Block : BlockType = 261
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 261
** INFO: <bus_callback:167>: Pipeline running
**PERF: FPS 0 (Avg)
**PERF: 0.00 (0.00)
0:00:05.153740568 7762 0xdf798f0 ERROR nvinfer gstnvinfer.cpp:1111:get_converted_buffer:<primary_gie> cudaMemset2DAsync failed with error cudaErrorInvalidValue while converting buffer
0:00:05.153855933 7762 0xdf798f0 WARN nvinfer gstnvinfer.cpp:1372:gst_nvinfer_process_full_frame:<primary_gie> error: Buffer conversion failed
ERROR from primary_gie: Buffer conversion failed
Debug info: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(1372): gst_nvinfer_process_full_frame (): /GstPipeline:pipeline/GstBin:primary_gie_bin/GstNvInfer:primary_gie
Quitting
App run failed
Below is a screenshot of the error.
When I execute the script with an MJPEG input file, it shows an internal data stream error:
siddhu2041@linux:/opt/nvidia/deepstream/deepstream-5.1/sources/objectDetector_Yolo$ deepstream-app -c deepstream_app_config_yoloV4.txt
Unknown or legacy key specified ‘is-classifier’ for group [property]
Using winsys: x11
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream-5.1/lib/libnvds_mot_klt.so
gstnvtracker: Optional NvMOT_RemoveStreams not implemented
gstnvtracker: Batch processing is OFF
gstnvtracker: Past frame output is OFF
0:00:04.905972196 7869 0x3be9da10 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1702> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-5.1/sources/objectDetector_Yolo/human_detection/custom-yolov4-tiny_human1-608.engine
INFO: [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT 000_net 3x608x608
1 OUTPUT kFLOAT 030_convolutional 18x19x19
2 OUTPUT kFLOAT 037_convolutional 18x38x38
0:00:04.906157303 7869 0x3be9da10 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1806> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-5.1/sources/objectDetector_Yolo/human_detection/custom-yolov4-tiny_human1-608.engine
0:00:04.923534193 7869 0x3be9da10 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<primary_gie> [UID 1]: Load new model:/opt/nvidia/deepstream/deepstream-5.1/sources/objectDetector_Yolo/config_infer_primary_yoloV4.txt sucessfully
Runtime commands:
h: Print this help
q: Quit
p: Pause
r: Resume
** INFO: <bus_callback:181>: Pipeline ready
** INFO: <bus_callback:167>: Pipeline running
**PERF: FPS 0 (Avg)
**PERF: 0.00 (0.00)
nvbufsurface: memory type (3) not supported
Error(-1) in buffer allocation
** (deepstream-app:7869): CRITICAL **: 17:19:20.079: gst_nvds_buffer_pool_alloc_buffer: assertion ‘mem’ failed
ERROR from primary_gie_conv: failed to activate bufferpool
Debug info: gstbasetransform.c(1670): default_prepare_output_buffer (): /GstPipeline:pipeline/GstBin:primary_gie_bin/Gstnvvideoconvert:primary_gie_conv:
failed to activate bufferpool
ERROR from typefind: Internal data stream error.
Debug info: gsttypefindelement.c(1236): gst_type_find_element_loop (): /GstPipeline:pipeline/GstBin:multi_src_bin/GstBin:src_sub_bin0/GstURIDecodeBin:src_elem/GstDecodeBin:decodebin0/GstTypeFindElement:typefind:
streaming stopped, reason error (-5)
ERROR from queue: Internal data stream error.
Debug info: gstqueue.c(988): gst_queue_handle_sink_event (): /GstPipeline:pipeline/GstBin:multi_src_bin/GstBin:src_sub_bin0/GstQueue:queue:
streaming stopped, reason error (-5)
Quitting
App run failed
Below is a screenshot of the error.
And when I use an H.264 input video file, it shows both the "Buffer conversion failed" error and the "Internal data stream error".
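For reference, the source section of my deepstream_app_config_yoloV4.txt looks roughly like the sketch below (the URI is a placeholder for my actual file path; the other values are the common defaults from the sample configs, so my real file may differ):

```
[source0]
enable=1
# type 3 = multi-URI file source in deepstream-app configs
type=3
# placeholder path; I swap this between the mp4, mjpeg, and h264 files
uri=file:///path/to/input.mp4
num-sources=1
gpu-id=0
# 0 = platform default memory for the output buffers
nvbuf-memory-type=0
```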
Please help me sort out these errors.