• Hardware Platform (Jetson / GPU) NVIDIA GeForce RTX 3090
• DeepStream Version 6.3
• JetPack Version (valid for Jetson only)
• TensorRT Version 8.4.0
• NVIDIA GPU Driver Version (valid for GPU only) 535.113.01
• Issue Type(questions, new requirements, bugs) questions
I am using nvinferserver to run inference in a DeepStream pipeline that consists of two models (PeopleNet + a classifier). I am using deepstream-test3 as a template and modifying the code to make it work.
Currently there is a config file for PeopleNet (config_triton_infer_primary_peoplenet.txt), a config file for the classifier model (config_triton_infer_primary_agegender.txt), and a config file for the models on the Triton server.
Here’s the pipeline:
Here’s how the pipeline is linked:
sinkpad1 = container.get_request_pad("video_0")
if not sinkpad1:
    sys.stderr.write(" Unable to get the sink pad of qtmux \n")
srcpad1 = codeparser.get_static_pad("src")
if not srcpad1:
    sys.stderr.write(" Unable to get mpeg4 parse src pad \n")
The pipeline works, i.e. there are no issues with loading the models or running the pipeline, but I am not able to get the classifier results. I don't know where they are supposed to be stored.
Do I need another probe function for the SGIE besides the one used for the PGIE? And if so, should that probe function deal with a specific object (detected by PeopleNet) rather than the whole frame?
You can find the config files attached.
config_triton_infer_primary_agegender.txt (1.3 KB)
config_triton_infer_primary_peoplenet.txt (2.0 KB)
config.txt (375 Bytes)
Your efforts are appreciated.
1- Yes, the detector works fine and the bounding boxes are drawn on the video. Where are the classifier results stored?
2- What do you mean by asking whether the SGIE works using a third-party tool? How can I try something like this?
3- In config_triton_infer_primary_agegender.txt, do you mean process_mode should be PROCESS_MODE_CLIP_OBJECTS, since it is an SGIE?
4- Won't the default pre/post-processing be added if I don't add my own?
5- This example uses nvinfer, not nvinferserver. Can you provide a sample using Triton?
Thank you for your support.
1- Which metadata do you mean, obj_meta? Can I just use the probe function of deepstream-test2? I don't want to draw the label yet; I just want to print the resulting vector.
2- Yes, I tested this model separately and it works fine. I added the preprocessing, but I think the post-processing will be added inside the code.
3- Yes, it is. In your first response you said: "please understand the preprocess and postprocess parameters first. In config_triton_infer_primary_agegender.txt, process_mode should be PROCESS_MODE_FULL_FRAME"
So, should it be PROCESS_MODE_FULL_FRAME or PROCESS_MODE_CLIP_OBJECTS? In the sample you mentioned it is PROCESS_MODE_CLIP_OBJECTS.
4- I just followed it to write my preprocessing.
Another question: do I need a tracker to use the SGIE? Does the SGIE work on the tracking ID given by the tracker?
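For readers following along, the process_mode question can be sketched as an nvinferserver config fragment. This is not from the thread's attached files; the ids and batch size are illustrative, and a secondary classifier normally runs in clip-objects mode on the PGIE's detections:

```
infer_config {
  unique_id: 2            # SGIE id; must differ from the PGIE's unique_id
  max_batch_size: 16
  # ... backend / preprocess / postprocess settings ...
}
input_control {
  # Run on cropped detected objects, not on the full frame:
  process_mode: PROCESS_MODE_CLIP_OBJECTS
  operate_on_gie_id: 1    # consume objects produced by the PGIE (PeopleNet)
}
```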
1. Yes, NvDsClassifierMeta, which is in NvDsObjectMeta. Yes, please refer to pgie_src_pad_buffer_probe of deepstream_3d_action_recognition.cpp for how to access NvDsClassifierMeta.
2. Please refer to /opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app-triton/config_infer_secondary_plan_engine_carcolor.txt.
3. If your model needs a sequence of pictures, a tracker is needed; otherwise it is not.
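The metadata traversal described above can be sketched as a Python pad probe. This is a sketch, not from the thread: it assumes the DeepStream Python bindings (pyds) and the metadata fields used in the deepstream-test2 Python sample; `sgie_probe` and `format_result` are illustrative names.

```python
# Sketch of a buffer probe that prints SGIE classifier results per detected
# object, following the NvDsBatchMeta -> NvDsFrameMeta -> NvDsObjectMeta ->
# NvDsClassifierMeta -> NvDsLabelInfo chain.

def format_result(obj_label, labels):
    """Render one object's results, e.g. 'person: male (0.93)'.
    labels is a list of (result_label, result_prob) tuples."""
    return obj_label + ": " + ", ".join(f"{l} ({p:.2f})" for l, p in labels)

def sgie_probe(pad, info, u_data):
    # Imported lazily: only available inside a DeepStream environment.
    import pyds
    from gi.repository import Gst

    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK

    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        l_obj = frame_meta.obj_meta_list
        while l_obj is not None:
            obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
            # Classifier (SGIE) results hang off each detected object,
            # not off the frame as a whole.
            l_cls = obj_meta.classifier_meta_list
            while l_cls is not None:
                cls_meta = pyds.NvDsClassifierMeta.cast(l_cls.data)
                labels = []
                l_label = cls_meta.label_info_list
                while l_label is not None:
                    label_info = pyds.NvDsLabelInfo.cast(l_label.data)
                    labels.append((label_info.result_label,
                                   label_info.result_prob))
                    l_label = l_label.next
                print(format_result(obj_meta.obj_label, labels))
                l_cls = l_cls.next
            l_obj = l_obj.next
        l_frame = l_frame.next
    return Gst.PadProbeReturn.OK
```

The probe would be attached downstream of the SGIE (e.g. on the tiler's sink pad) with `pad.add_probe(Gst.PadProbeType.BUFFER, sgie_probe, 0)`.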
I was able to fix the issues you mentioned, but now I get a different error.
Do you have any idea why this happened?
Could you share the current config_triton_infer_primary_agegender.txt?
Please add a postprocess configuration. Please refer to deepstream-infer-tensor-meta-test/inferserver/dstensor_sgie1_config.txt for an "output_tensor_meta: true" sample, and to deepstream-test2/dstest2_sgie1_nvinferserver_config.txt for an "output_tensor_meta: false" sample.
Okay, I'll add the postprocess config.
I am sorry, but I don't even know the difference between setting it to true and false.
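For anyone with the same question: as I understand it (my gloss, not an answer given in the thread), output_tensor_meta controls whether the raw output tensors are attached to the metadata for the application to parse itself, or whether the plugin's configured postprocess parses them into standard classifier metadata. A sketch of the two variants, with illustrative values:

```
# Variant A: plugin-side parsing (as in dstest2_sgie1_nvinferserver_config.txt)
infer_config {
  postprocess {
    classification {
      threshold: 0.5            # illustrative value
    }
  }
}
output_control {
  output_tensor_meta: false     # results appear as NvDsClassifierMeta
}

# Variant B: app-side parsing (as in dstensor_sgie1_config.txt)
output_control {
  output_tensor_meta: true      # raw tensors attached as user meta;
                                # parse them yourself in a probe
}
```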
Okay, it is finally working!
I had some issues that wouldn't have been solved without your support, Fanzh!
1- My model was expecting 1 image while there were 4 detected objects, so I had to convert the age/gender model to work with a bigger batch size.
2- I added the post-processing to the config.
3- I created a probe function for the tiler component, not the SGIE itself.
Thank you one more time.
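On point 1, the batch-size side of the fix can be sketched in the nvinferserver config (the value is illustrative; the model itself also has to be exported/converted with the larger batch dimension, which was the actual fix here):

```
infer_config {
  # The SGIE receives one cropped image per detected object, so the engine
  # must accept batches at least as large as the expected object count.
  max_batch_size: 16
}
```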
This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.