Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU): Jetson Orin Nano 4 GB (Seeed Studio)
• DeepStream Version: DeepStream 7.0
• JetPack Version (valid for Jetson only): JetPack 6.0
• TensorRT Version: 8.6.2
The DeepStream pipeline we have is:
→ nvstreammux → nvinfer (pgie yolov8 pose) → tracker → nvinfer (sgie, custom model action recognition) → nvstreamdemux →
I am able to add the YOLOv8-pose model to the pipeline and get the keypoints. Now I am trying to add a custom classification model as the SGIE and get the classified label. The problem I am facing is passing metadata as input to the SGIE in the DeepStream pipeline.
The keypoints output by the PGIE (YOLOv8-pose) need to be processed, and the processed keypoints should be passed as input to the SGIE for classification.
The SGIE model takes input of size [30, 34] (a sequence of 30 frames, each with 34 keypoint values).
Please guide me on how to send the processed data as input to the SGIE model.
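One way to think about the preprocessing step, sketched here outside of DeepStream with stand-in data: keep a per-tracked-object ring buffer of the last 30 frames' keypoints and only hand the SGIE a [30, 34] tensor once the buffer is full. The class name `KeypointBuffer` and the data layout are hypothetical, not part of any SDK.

```python
from collections import defaultdict, deque

SEQ_LEN = 30      # frames per sequence (SGIE input dim 0)
KPT_VALUES = 34   # 17 keypoints x (x, y) per frame (SGIE input dim 1)

class KeypointBuffer:
    """Hypothetical per-tracked-object buffer of recent pose keypoints."""
    def __init__(self):
        # one bounded deque per tracker object id; old frames fall off
        self.frames = defaultdict(lambda: deque(maxlen=SEQ_LEN))

    def push(self, object_id, keypoints):
        # keypoints: flat list of 34 floats for this object in this frame
        assert len(keypoints) == KPT_VALUES
        self.frames[object_id].append(list(keypoints))

    def tensor(self, object_id):
        """Return a [30, 34] list-of-lists once 30 frames are buffered, else None."""
        seq = self.frames[object_id]
        if len(seq) < SEQ_LEN:
            return None
        return list(seq)

buf = KeypointBuffer()
for frame in range(SEQ_LEN):
    buf.push(object_id=1, keypoints=[float(frame)] * KPT_VALUES)
t = buf.tensor(1)
print(len(t), len(t[0]))  # 30 34
```

In a real pipeline this accumulation would live inside the nvdspreprocess custom library (keyed by the tracker's object id), since a classifier SGIE on its own only sees one frame at a time.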
Thanks @Fiona.Chen, this helped us, and we have started integrating Gst-nvdspreprocess into the pipeline.
But after we integrated the “Gst-nvdspreprocess” plugin into the pipeline, the code crashes with a segmentation fault.
At the same time we see a log in dmesg:
[ 8190.287391] nvmap_alloc_handle: PID 4546: python: WARNING: All NvMap Allocations must have a tag to identify the subsystem allocating memory.Please pass the tag to the API call NvRmMemHanldeAllocAttr() or relevant.
I am not sure what is causing the crash. I can see that the custom library (set by “custom-lib-path”) is getting loaded, because we see the log print from the “initLib” function.
But after the “CustomTensorPreparation()” function in the custom library (set by “custom-tensor-preparation-function”) executes, the pipeline crashes. Even when I keep this function empty, the pipeline still crashes.
That is a low-level log; nobody can tell anything from this log alone.
Have you implemented your own customized “CustomTensorPreparation” and “CustomTransformation” in your own “/home/fastedge/fastedge/deepstream-app-people-tracking/deepstream-app/libcustom2d_preprocess.so”? Or is it just a copy of the default “/opt/nvidia/deepstream/deepstream/lib/gst-plugins/libcustom2d_preprocess.so” from the DeepStream SDK?
We are still using YOLOv8-pose detection. The pipeline after adding nvdspreprocess looks like below:
→ nvstreammux → nvinfer (pgie yolov8 pose) → tracker → nvdspreprocess → nvstreamdemux →
I have yet to add the nvinfer (sgie, custom action recognition model) to the pipeline; I thought I would do it step by step.
libcustom2d_preprocess.so contains blank functions. I have attached its source code; it is taken from the GitHub link you shared.
I am looking to parse the keypoints from mask_params in NvDsObjectMeta, consolidate and prepare them, and pass them on to the next (secondary) nvinfer plugin. I think I don’t require all the configuration parameters in the configuration file.
I am unable to run the pipeline with the library that contains blank functions. I am not sure whether adding the full functionality to the custom library and adding the SGIE with its model configuration will fix the segmentation fault.
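For the mask_params parsing step, here is a minimal standalone sketch. It assumes the pose post-processing stores 17 (x, y, confidence) triplets in mask_params.data, as some YOLOv8-pose DeepStream samples do; verify against your actual layout before relying on it. `extract_xy` and the stand-in buffer are hypothetical names.

```python
NUM_KPTS = 17  # COCO body keypoints produced by YOLOv8-pose

def extract_xy(mask_data):
    """Hypothetical parser: assumes mask_data holds 17 (x, y, conf)
    triplets back to back. Drops the confidence and returns the 34
    (x, y) values that would feed one frame of the SGIE input."""
    assert len(mask_data) == NUM_KPTS * 3
    xy = []
    for i in range(NUM_KPTS):
        x, y, conf = mask_data[3 * i: 3 * i + 3]
        xy.extend([x, y])
    return xy

# stand-in buffer: keypoint i at (i, 2*i) with confidence 0.9
data = []
for i in range(NUM_KPTS):
    data.extend([float(i), 2.0 * i, 0.9])
print(len(extract_xy(data)))  # 34
```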
Do you mean the yolov8 pose model is a segmentation model? Do you have the algorithm for getting body keypoints from the mask?
Which configuration file do you mean?
You must not use the blank functions with the sample’s nvdspreprocess configuration file: the blank functions cannot generate the tensor data the configuration declares, so the downstream buffer reads/writes crash. The customization involves not only the functions but also the configuration. Please start with the gst-nvdspreprocess plugin source code to understand how the nvdspreprocess library works.
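The contract described above can be illustrated with a toy model (plain Python, no DeepStream): the config advertises a tensor shape, and whatever the custom function produces must fill exactly that many elements, otherwise the consumer reads memory that was never written. `prepare_tensor` is an illustrative stand-in, not the real plugin API.

```python
def prepare_tensor(fill_fn, expected_shape):
    """Toy model of the nvdspreprocess contract: the custom
    tensor-preparation function must fill a buffer of exactly the
    size the config advertises. A blank function that returns
    nothing (or the wrong size) breaks that contract."""
    n = 1
    for d in expected_shape:
        n *= d
    tensor = fill_fn(expected_shape)
    if tensor is None or len(tensor) != n:
        return "contract violated: downstream read would crash"
    return "ok"

blank = lambda shape: None                          # like the blank custom function
real = lambda shape: [0.0] * (shape[0] * shape[1])  # fills the promised size

print(prepare_tensor(blank, (30, 34)))  # contract violated: downstream read would crash
print(prepare_tensor(real, (30, 34)))   # ok
```

In the real plugin the crash happens later and less politely (a segfault rather than an error string), which is why an empty function plus the sample configuration file fails even before the SGIE is added.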
There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.