Hello, I want to run custom YOLO ONNX models with DeepStream. I am able to run the YOLOv3 pre-trained weights successfully with the DeepStream Python app. Could you please tell me how I can run other YOLO models using the DeepStream Python samples? The config I used for the YOLOv3 Python app is attached below: test_yolo.txt (3.6 KB)
You can implement the demo in Python, but the parts that involve your model's special post-processing need to be implemented in C/C++, like the following parameters in your configuration file.
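For reference, these are the kinds of nvinfer configuration keys that point custom post-processing to a compiled C/C++ library. This is a sketch based on the objectDetector_Yolo sample; the function and library names here are the sample's and would differ for your own model:

```
[property]
# Name of the custom box-parsing function exported by the shared library
parse-bbox-func-name=NvDsInferParseCustomYoloV3
# Path to the compiled custom implementation
custom-lib-path=/opt/nvidia/deepstream/deepstream/sources/objectDetector_Yolo/nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
# Optional: custom engine-creation function (used when building from cfg/weights)
engine-create-func-name=NvDsInferYoloCudaEngineGet
```

With a custom model you would replace the function names and library path with the ones your own `nvdsinfer_custom_impl` exports.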
Thank you so much for your reply. So I should implement my custom post-processing functions in C++ in this directory, right? “/opt/nvidia/deepstream/deepstream-6.4/sources/objectDetector_Yolo/nvdsinfer_custom_impl_Yolo”
Is it possible to do the pre-processing and post-processing in Python, or must we depend on C++?
The preprocessing is handled by a plugin: you can create a config for it and set the relevant parameters on nvdspreprocess.
You can refer to deepstream_ssd_parser.py to see how to do the post-processing in Python.
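In the Python route, the parsing itself is ordinary array work on the output tensors read in a pad probe (as deepstream_ssd_parser.py does). Below is a minimal sketch of that logic using plain NumPy arrays standing in for the real pyds tensor meta; the thresholds and the flat `boxes`/`scores` layout are assumptions for illustration:

```python
import numpy as np

def parse_detections(boxes, scores, score_thr=0.4, iou_thr=0.5):
    """Filter boxes by confidence and apply greedy NMS.

    boxes:  (N, 4) array of [x1, y1, x2, y2]
    scores: (N,) array of confidences
    Returns the indices of the kept boxes, highest score first.
    """
    idx = np.flatnonzero(scores >= score_thr)
    order = idx[np.argsort(-scores[idx])]  # highest confidence first
    kept = []
    while order.size:
        i = order[0]
        kept.append(int(i))
        if order.size == 1:
            break
        rest = order[1:]
        # IoU of the top-scoring box against the remaining candidates
        x1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        y1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        x2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        y2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter)
        order = rest[iou <= iou_thr]  # drop heavily overlapping boxes
    return kept
```

In a real app this function would be called from the probe after copying the output layers out of NvDsInferTensorMeta, and the surviving boxes would be attached back as object metadata.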
Thank you for your reply. I have tried to implement the custom post-processing in Python by referring to the DeepStream SSD parser example, but I am getting the following error. My config file and custom model code are in the attached zip; could you please help me check it and tell me how to proceed? custom_model_code.zip (10.7 KB)
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream-6.2/lib/libnvds_nvmultiobjecttracker.so
gstnvtracker: Batch processing is ON
gstnvtracker: Past frame output is OFF
[NvTrackerParams::getConfigRoot()] !!![WARNING] Empty config file path is provided. Will go ahead with default values
[NvMultiObjectTracker] Initialized
0:00:04.023761729 149788 0xa45d590 INFO nvinfer gstnvinfer.cpp:680:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1909> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-6.2/sources/deepstream_python_apps/apps/DeepStream-Yolo-Seg/model_extra.engine
WARNING: [TRT]: The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
INFO: [Implicit Engine Info]: layers num: 9
0 INPUT kFLOAT images 3x640x640
1 INPUT kINT32 max_output_boxes_per_class 0
2 INPUT kFLOAT iou_threshold 0
3 INPUT kFLOAT score_threshold 0
4 OUTPUT kINT32 valid 0
5 OUTPUT kFLOAT rois 4
6 OUTPUT kFLOAT scores 0
7 OUTPUT kINT32 class_ids 0
8 OUTPUT kFLOAT masks 160x160
0:00:04.180381456 149788 0xa45d590 INFO nvinfer gstnvinfer.cpp:680:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2012> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-6.2/sources/deepstream_python_apps/apps/DeepStream-Yolo-Seg/model_extra.engine
0:00:04.183548843 149788 0xa45d590 ERROR nvinfer gstnvinfer.cpp:674:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::initResource() <nvdsinfer_context_impl.cpp:871> [UID = 1]: Custom parse function not found for InstanceSegment-postprocessor
ERROR: Infer Context failed to initialize post-processing resource, nvinfer error:NVDSINFER_RESOURCE_ERROR
ERROR: Infer Context prepare postprocessing resource failed., nvinfer error:NVDSINFER_RESOURCE_ERROR
0:00:04.297145407 149788 0xa45d590 WARN nvinfer gstnvinfer.cpp:888:gst_nvinfer_start: error: Failed to create NvDsInferContext instance
0:00:04.297185664 149788 0xa45d590 WARN nvinfer gstnvinfer.cpp:888:gst_nvinfer_start: error: Config file path: custom_models.txt, NvDsInfer Error: NVDSINFER_RESOURCE_ERROR
[NvMultiObjectTracker] De-initialized
Error: gst-resource-error-quark: Failed to create NvDsInferContext instance (1): /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(888): gst_nvinfer_start (): /GstPipeline:pipeline0/GstNvInfer:primary-inference:
Config file path: custom_models.txt, NvDsInfer Error: NVDSINFER_RESOURCE_ERROR
Exiting app
If I don't use Python for the processing it also doesn't work: nvinfer fails to allocate buffers. If at least the inference succeeded, we could do the post-processing in either Python or C++ based on the output. Can you please help me check the error below?
Linking elements in the Pipeline
object_detector_improvement_medicine.py:534: PyGIDeprecationWarning: GObject.MainLoop is deprecated; use GLib.MainLoop instead
loop = GObject.MainLoop()
Now playing…
1 : rtsp://solomon:solomon888@10.1.2.124:88/videoMain
Starting pipeline
Using winsys: x11
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream-6.2/lib/libnvds_nvmultiobjecttracker.so
gstnvtracker: Batch processing is ON
gstnvtracker: Past frame output is OFF
[NvTrackerParams::getConfigRoot()] !!![WARNING] Empty config file path is provided. Will go ahead with default values
[NvMultiObjectTracker] Initialized
0:00:06.165117675 306577 0x184ad190 INFO nvinfer gstnvinfer.cpp:680:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1909> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-6.2/sources/deepstream_python_apps/apps/DeepStream-Yolo-Seg/medicine_model.engine
WARNING: [TRT]: The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
INFO: [Implicit Engine Info]: layers num: 6
0 INPUT kFLOAT images 3x480x640
1 OUTPUT kINT32 valid 0
2 OUTPUT kFLOAT rois 4
3 OUTPUT kFLOAT scores 0
4 OUTPUT kINT32 class_ids 0
5 OUTPUT kFLOAT masks 120x160
0:00:06.324601056 306577 0x184ad190 INFO nvinfer gstnvinfer.cpp:680:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2012> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-6.2/sources/deepstream_python_apps/apps/DeepStream-Yolo-Seg/medicine_model.engine
0:00:06.328148297 306577 0x184ad190 ERROR nvinfer gstnvinfer.cpp:674:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::allocateBuffers() <nvdsinfer_context_impl.cpp:1437> [UID = 1]: Failed to allocate cuda output buffer during context initialization
0:00:06.328188682 306577 0x184ad190 ERROR nvinfer gstnvinfer.cpp:674:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1289> [UID = 1]: Failed to allocate buffers
0:00:06.345444133 306577 0x184ad190 WARN nvinfer gstnvinfer.cpp:888:gst_nvinfer_start: error: Failed to create NvDsInferContext instance
0:00:06.345486983 306577 0x184ad190 WARN nvinfer gstnvinfer.cpp:888:gst_nvinfer_start: error: Config file path: config_infer_primary_yoloV8_seg_medicine.txt, NvDsInfer Error: NVDSINFER_CUDA_ERROR
[NvMultiObjectTracker] De-initialized
Error: gst-resource-error-quark: Failed to create NvDsInferContext instance (1): /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(888): gst_nvinfer_start (): /GstPipeline:pipeline0/GstNvInfer:primary-inference:
Config file path: config_infer_primary_yoloV8_seg_medicine.txt, NvDsInfer Error: NVDSINFER_CUDA_ERROR
Exiting app
I added some logs, but they are not printed even after recompiling. From my understanding, models with dynamic output shapes are not working while models with static shapes are. How can I handle model outputs whose shapes are dynamic? My problem is similar to the link I shared below, but I don't have a .pth model in which to add the extra dimensions.
Could you ask on that topic how the extra dimension was added to the scores, boxes, and labels in PyTorch? We don't have a lot of experience with PyTorch.
We can support dynamic shapes, but some output layers of your model have dimension 0. You can go to our TAO forum and ask how to generate a model that we can support. Thanks.
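For what it's worth, when a model exports a per-image `valid` count alongside fixed-capacity `rois`/`scores`/`class_ids` buffers (as the layer dumps above suggest), a Python parser can avoid dynamic shapes entirely by slicing each padded output to that count. A sketch under that assumption, again with plain NumPy arrays standing in for the tensor meta; the network and frame sizes are example values:

```python
import numpy as np

def parse_valid_detections(valid, rois, scores, class_ids,
                           net_w=640, net_h=640, frame_w=1280, frame_h=640):
    """Slice fixed-size output buffers down to the real detections.

    valid:     int32 scalar, number of real detections for this image
    rois:      (max_det, 4) box buffer in network-input coordinates, padded
    scores:    (max_det,)   confidence buffer
    class_ids: (max_det,)   class-index buffer
    Returns boxes scaled to frame coordinates plus matching scores/classes.
    """
    n = int(valid)
    boxes = rois[:n].astype(float).copy()
    # Scale from network-input resolution to the frame resolution
    boxes[:, [0, 2]] *= frame_w / net_w
    boxes[:, [1, 3]] *= frame_h / net_h
    return boxes, scores[:n], class_ids[:n]
```

This way the engine can be built with a fixed `max_output_boxes_per_class`, every output buffer has a static shape, and only the slice length varies per frame.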