• Hardware Platform: Jetson Nano
• DeepStream Version: 5.1
• JetPack Version: 4.5
• TensorRT Version: 7.1.3.0
Currently I am working with the DeepStream Python apps, and I want to feed images into the DeepStream Python code.
While reading through the NVIDIA forums and blogs I found that /opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/deepstream-image-decode-test can be used for images in DeepStream, but I want to do the same in Python. Is there proper documentation or any other solution I can look into?
I also want to run inference on an image in DeepStream with Python. Can you provide a pipeline for image inference that saves the image at the end? I can't find any example that saves an image.
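Below is a minimal sketch of such a pipeline in Python, mirroring the element chain from deepstream-image-decode-test. The file names, the streammux dimensions, and the nvinfer config path are placeholders, not confirmed values; the caps filter before jpegenc assumes the software JPEG encoder needs the frame back in system memory.

```python
#!/usr/bin/env python3
# Sketch: single-image inference pipeline that re-encodes and saves the frame.
# "input.jpg", "output.jpg" and the nvinfer config path are placeholders.
import sys
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

pipeline = Gst.parse_launch(
    # nvstreammux is declared first so the decode branch can link to m.sink_0
    "nvstreammux name=m batch-size=1 width=1280 height=720 ! "
    "nvinfer config-file-path=config_infer_primary.txt ! "
    # drop back to system memory so the software jpegenc can read the frame
    "nvvideoconvert ! video/x-raw, format=I420 ! "
    "jpegenc ! filesink location=output.jpg "
    "filesrc location=input.jpg ! jpegparse ! nvv4l2decoder ! m.sink_0"
)

pipeline.set_state(Gst.State.PLAYING)

# A file source sends EOS after the single frame; wait for it so the
# output JPEG is fully written before tearing the pipeline down.
bus = pipeline.get_bus()
msg = bus.timed_pop_filtered(
    Gst.CLOCK_TIME_NONE, Gst.MessageType.EOS | Gst.MessageType.ERROR)
if msg and msg.type == Gst.MessageType.ERROR:
    err, dbg = msg.parse_error()
    print(f"Error: {err.message}", file=sys.stderr)

pipeline.set_state(Gst.State.NULL)
```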
I prepared a pipeline based on deepstream-image-decode-test and it looks like this: filesrc -> jpegparse -> nvv4l2decoder -> nvstreammux -> pgie -> nvvideoconvert -> jpegenc -> filesink, but it gets stuck and doesn't return any error. This is the full output:
Unknown or legacy key specified 'is-classifier' for group [property]
WARNING: ../nvdsinfer/nvdsinfer_func_utils.cpp:36 [TRT]: Using an engine plan file across different models of devices is not recommended and is likely to affect performance or even cause errors.
0:00:12.689411459 426 0x3af1c70 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1702> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-5.1/sources/objectDetector_Yolo/yolov4_face_new.engine
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:685 [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT input 3x608x608
1 OUTPUT kFLOAT boxes 22743x1x4
2 OUTPUT kFLOAT confs 22743x1
0:00:12.689488763 426 0x3af1c70 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1806> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-5.1/sources/objectDetector_Yolo/yolov4_face_new.engine
0:00:12.747416626 426 0x3af1c70 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<primary-inference> [UID 1]: Load new model:config_infer_primary_yoloV4_face.txt sucessfully
This is my script yolo_image.py (5.7 KB)
Can you tell me where my mistake is? I am using DeepStream 5.1 and a T4 card.
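The attached script isn't visible here, so the following is guesswork rather than a diagnosis. Two things may be worth checking: jpegenc is a software encoder and cannot consume NVMM device memory, so a plain video/x-raw caps filter between nvvideoconvert and jpegenc is usually needed (as in the sketch above); and a silent hang is easiest to localize by waiting on the bus with a timeout and dumping the pipeline graph when nothing arrives. A minimal sketch of that diagnostic:

```python
import os
import sys
# Must be set before Gst.init() for graph dumps to work.
os.environ.setdefault("GST_DEBUG_DUMP_DOT_DIR", "/tmp")

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

# Placeholder pipeline: substitute your own description here.
pipeline = Gst.parse_launch("videotestsrc num-buffers=1 ! fakesink")
pipeline.set_state(Gst.State.PLAYING)

# Wait up to 10 s for EOS; on timeout, snapshot the graph to see which
# element never negotiated or never left PAUSED.
bus = pipeline.get_bus()
msg = bus.timed_pop_filtered(10 * Gst.SECOND,
                             Gst.MessageType.EOS | Gst.MessageType.ERROR)
if msg is None:
    Gst.debug_bin_to_dot_file(pipeline, Gst.DebugGraphDetails.ALL, "stuck")
    print("No EOS within 10 s; graph written to /tmp/stuck.dot",
          file=sys.stderr)

pipeline.set_state(Gst.State.NULL)
```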
Thanks for your reply. I have an almost working solution, but now I have hit a weird bug. In my pipeline I modify the frame after inference in a videotemplate plugin. When I run the pipeline with a 1280x720 image of 107 kB everything works fine, but when I run it with a 1280x720 image of 0.97 MB I get this error:
(python3:453): GStreamer-CRITICAL **: 08:06:27.709: gst_buffer_resize_range: assertion 'bufmax >= bufoffs + offset + size' failed
and I found out that it happens in this call:
cv::Mat in_mat = cv::Mat(in_surf->surfaceList[frame_meta->batch_id].planeParams.height[0], in_surf->surfaceList[frame_meta->batch_id].planeParams.width[0], CV_8UC4, in_surf->surfaceList[frame_meta->batch_id].mappedAddr.addr[0], in_surf->surfaceList[frame_meta->batch_id].planeParams.pitch[0]);
Any idea how to fix this issue?
Hi, I partly solved this issue. I was getting it when I resized the image from 4032x2268 down to 1280x720, but when I resized it to 1920x1080 instead, the issue was gone. I have no idea why this happens, but with the bigger resolution it works for now.
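For what it's worth, mapped NvBufSurface planes are pitch-aligned, so pitch[0] can be larger than width*4 and the valid region is dataSize bytes rather than height*width*4; touching memory past that can trip exactly this kind of buffer assertion (an assumption here, not verified against the plugin). If the frame edit can move into Python instead of the videotemplate plugin, the DeepStream Python bindings do the mapping and pitch handling for you via pyds.get_nvds_buf_surface, as in the deepstream-imagedata-multistream sample. A sketch, assuming an upstream nvvideoconvert to RGBA in NVMM memory:

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst
import pyds

def frame_edit_probe(pad, info, u_data):
    """Pad probe that edits decoded frames in place.

    Assumes the buffer was converted upstream with
    `nvvideoconvert ! video/x-raw(memory:NVMM), format=RGBA`,
    which get_nvds_buf_surface requires.
    """
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK

    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        # RGBA NumPy view of the mapped surface; the binding handles
        # mapping, pitch and size, unlike a hand-built cv::Mat.
        frame = pyds.get_nvds_buf_surface(hash(gst_buffer),
                                          frame_meta.batch_id)
        frame[0:10, :, :] = 0  # example edit: black bar across the top
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK

# Usage: attach to e.g. the sink pad of the element after inference.
# element.get_static_pad("sink").add_probe(Gst.PadProbeType.BUFFER,
#                                          frame_edit_probe, None)
```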