Image inference in DeepStream Python

Hello,

• Hardware Platform : Jetson Nano
• DeepStream Version : 5.1
• JetPack Version : 4.5
• TensorRT Version : 7.1.3.0

Currently, I am working with the DeepStream Python apps. I want to use an image as input to the DeepStream Python code.

While reading through the NVIDIA forums and blogs, I found that I can use /opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/deepstream-image-decode-test for images in DeepStream, but I want to run it using Python. Is there any proper documentation or other solution I can look into?

Thank you
Viraj Hapaliya


Please refer to the DeepStream Python documentation and download the Python code; the Python sample deepstream-imagedata-multistream shows how to access image data. A sketch of the pattern it uses follows.
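As a minimal sketch of that pattern (simplified from the sample, not the full code): a pad probe placed after the nvvideoconvert + RGBA capsfilter that the sample sets up, which maps each batched frame to a numpy array with pyds.get_nvds_buf_surface and saves it with OpenCV. The probe name and output path are illustrative.

import cv2
import pyds
from gi.repository import Gst

def tiler_sink_pad_buffer_probe(pad, info, u_data):
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK

    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        # Requires the buffer to be RGBA, which the sample guarantees with an
        # nvvideoconvert + capsfilter placed upstream of this probe.
        n_frame = pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)
        frame_image = cv2.cvtColor(n_frame, cv2.COLOR_RGBA2BGRA)
        cv2.imwrite("frames/frame_%04d.jpg" % frame_meta.frame_num, frame_image)
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK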

Thank you.

Hello,

Actually, I am already using deepstream_python_apps/apps/deepstream-imagedata-multistream at master · NVIDIA-AI-IOT/deepstream_python_apps · GitHub for another project, where the input is an RTSP stream, and it is working fine.

After your reply I tried it with an image, but it is not working.

Command:
python3 deepstream_imagedata-multistream.py images/image_001.jpg frames

And I got this error:

Error: gst-resource-error-quark: Invalid URI “cropped/images_263.jpg”. (3): gsturidecodebin.c(1384): gen_source_element (): /GstPipeline:pipeline0/GstBin:source-bin-00/GstURIDecodeBin:uri-decode-bin

Thank you

python3 deepstream_imagedata-multistream.py file:///path to your source/

The sample builds its sources with uridecodebin, which expects a full URI (including the file:// scheme), not a bare path.

Hello,

Sorry, yes, I forgot to use the file:/// prefix. I have now changed it to the full path, and the command is:

python3 deepstream_imagedata-multistream.py file:///home/amnt/Work/deepstream_python_apps/apps/deepstream-imagedata-multistream/cam_0.jpg frames

After this, it throws the following error:

In cb_newpad

Error: gst-stream-error-quark: Internal data stream error. (1): gsttypefindelement.c(1236): gst_type_find_element_loop (): /GstPipeline:pipeline0/GstBin:source-bin-00/GstURIDecodeBin:uri-decode-bin/GstDecodeBin:decodebin0/GstTypeFindElement:typefind:
streaming stopped, reason not-negotiated (-4)
Exiting app

Thank you

For image decode, you can refer to deepstream-image-decode-test

Yes, I have used deepstream-image-decode-test for images. But how should I run it using Python?

If you need JPEG source input, you can refer to deepstream-image-decode-test and do some customization in deepstream-imagedata-multistream, as sketched below.
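A minimal sketch of that customization, assuming you replace the uridecodebin-based source bin in deepstream-imagedata-multistream with the JPEG chain that deepstream-image-decode-test uses (filesrc ! jpegparse ! nvv4l2decoder). The function name is hypothetical, and whether the mjpeg property is needed may depend on platform and version:

import sys
import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst

def create_jpeg_source_bin(index, file_path):
    # Name the bin like the sample's source bins so the rest of the app
    # can link it to nvstreammux unchanged.
    nbin = Gst.Bin.new("source-bin-%02d" % index)

    src = Gst.ElementFactory.make("filesrc", "file-source")
    parser = Gst.ElementFactory.make("jpegparse", "jpeg-parser")
    decoder = Gst.ElementFactory.make("nvv4l2decoder", "nvv4l2-decoder")
    if not src or not parser or not decoder:
        sys.stderr.write("Unable to create source bin elements\n")
        return None

    src.set_property("location", file_path)
    # deepstream-image-decode-test sets mjpeg=1 on the decoder for JPEG input;
    # treat this as an assumption to verify on your platform.
    decoder.set_property("mjpeg", 1)

    for elem in (src, parser, decoder):
        nbin.add(elem)
    src.link(parser)
    parser.link(decoder)

    # Expose the decoder's src pad as a ghost pad so the bin links to
    # nvstreammux like the original uridecodebin source bin.
    nbin.add_pad(Gst.GhostPad.new("src", decoder.get_static_pad("src")))
    return nbin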


Hello,

I also want to do inference on an image in DeepStream with Python. Can you provide me a pipeline for image inference that saves the image at the end? I can't find any example that saves an image.

Thanks

deepstream-imagedata-multistream

I prepared a pipeline based on deepstream-image-decode-test and it looks like this: filesrc -> jpegparse -> nvv4l2decoder -> nvstreammux -> pgie -> nvvideoconvert -> jpegenc -> filesink, but it gets stuck and doesn't return any error. This is the entire output:

Unknown or legacy key specified 'is-classifier' for group [property]
WARNING: ../nvdsinfer/nvdsinfer_func_utils.cpp:36 [TRT]: Using an engine plan file across different models of devices is not recommended and is likely to affect performance or even cause errors.
0:00:12.689411459   426      0x3af1c70 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1702> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-5.1/sources/objectDetector_Yolo/yolov4_face_new.engine
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:685 [Implicit Engine Info]: layers num: 3
0   INPUT  kFLOAT input           3x608x608
1   OUTPUT kFLOAT boxes           22743x1x4
2   OUTPUT kFLOAT confs           22743x1

0:00:12.689488763   426      0x3af1c70 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1806> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-5.1/sources/objectDetector_Yolo/yolov4_face_new.engine
0:00:12.747416626   426      0x3af1c70 INFO                 nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<primary-inference> [UID 1]: Load new model:config_infer_primary_yoloV4_face.txt sucessfully

This is my script yolo_image.py (5.7 KB)
Can you tell me where I made a mistake? I am using DeepStream 5.1 and a T4 card.

Please add a jpegparse element between jpegenc and filesink, for example:
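A minimal sketch of the tail of the pipeline with that change; pipeline and nvvideoconvert are assumed to be the elements already created in your script:

jpegenc = Gst.ElementFactory.make("jpegenc", "jpeg-encoder")
jpegparse_out = Gst.ElementFactory.make("jpegparse", "jpeg-parser-out")
filesink = Gst.ElementFactory.make("filesink", "file-sink")
filesink.set_property("location", "out.jpg")

for elem in (jpegenc, jpegparse_out, filesink):
    pipeline.add(elem)

# jpegparse sits between jpegenc and filesink so the encoded JPEG is
# properly framed before it is written out.
nvvideoconvert.link(jpegenc)
jpegenc.link(jpegparse_out)
jpegparse_out.link(filesink)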

Thanks for your reply. I have an almost-working solution, but now I have a weird bug. In my pipeline I modify the frame after inference in a videotemplate plugin. When I run the pipeline with a 1280x720 image of 107 kB everything works fine, but when I run it with a 1280x720 image of 0.97 MB I get this error:

(python3:453): GStreamer-CRITICAL **: 08:06:27.709: gst_buffer_resize_range: assertion 'bufmax >= bufoffs + offset + size' failed

and I found out that it happens in this call:

cv::Mat in_mat = cv::Mat(in_surf->surfaceList[frame_meta->batch_id].planeParams.height[0],
                         in_surf->surfaceList[frame_meta->batch_id].planeParams.width[0],
                         CV_8UC4,
                         in_surf->surfaceList[frame_meta->batch_id].mappedAddr.addr[0],
                         in_surf->surfaceList[frame_meta->batch_id].planeParams.pitch[0]);

Any idea how to fix this issue?

Sorry for the late reply.
Did you solve the issue? If not, do you still see it without the frame modification after inference in the videotemplate plugin?

Hi, I partly solved this issue. I was getting it when I resized the image from 4032x2268 to 1280x720, but when I resized it to 1920x1080 instead, the issue was gone. I have no idea why it happens, but with the bigger resolution it works for now.