Nvinferserver with custom models

• Hardware Platform (Jetson / GPU) NVIDIA GeForce RTX 3090
• DeepStream Version 6.3
• JetPack Version (valid for Jetson only)
• TensorRT Version 8.4.0
• NVIDIA GPU Driver Version (valid for GPU only) 535.113.01
• Issue Type(questions, new requirements, bugs) questions

Hello,

I am using nvinferserver for inference in a DeepStream pipeline that consists of two models (PeopleNet + a classifier). I am using deepstream-test3 as a template and modifying the code to make it work.

Currently, there is a config file for PeopleNet (config_triton_infer_primary_peoplenet.txt), a config file for the classifier model (config_triton_infer_primary_agegender.txt), and a config file for the models on the Triton server.

Here’s the pipeline:

pipeline.add(pgie)
pipeline.add(tracker)
if nvdslogger:
    pipeline.add(nvdslogger)
pipeline.add(sgie)
pipeline.add(tiler)
pipeline.add(nvvidconv)
pipeline.add(nvosd)
pipeline.add(nvvidconv2)
pipeline.add(capsfilter)
pipeline.add(encoder)
pipeline.add(codeparser)
pipeline.add(container)
pipeline.add(sink)

Here’s how the pipeline is linked:

streammux.link(queue1)
queue1.link(pgie)
pgie.link(tracker)
tracker.link(sgie)
sgie.link(queue2)
if nvdslogger:
    queue2.link(nvdslogger)
    nvdslogger.link(tiler)
else:
    queue2.link(tiler)
tiler.link(queue3)
queue3.link(nvvidconv)
nvvidconv.link(queue4)
queue4.link(nvosd)
nvosd.link(queue5)
queue5.link(nvvidconv2)
nvvidconv2.link(capsfilter)
capsfilter.link(encoder)
encoder.link(codeparser)
sinkpad1 = container.get_request_pad("video_0")
if not sinkpad1:
sys.stderr.write(" Unable to get the sink pad of qtmux \n")
srcpad1 = codeparser.get_static_pad("src")
if not srcpad1:
sys.stderr.write(" Unable to get mpeg4 parse src pad \n")
srcpad1.link(sinkpad1)
container.link(sink)

The pipeline works, i.e. there are no issues with loading the models or running the pipeline. However, I am not able to get the classifier results, and I don't know where they are supposed to be stored.

Do I need a separate probe function for the sgie, in addition to the one used for the pgie? And if so, should that probe function deal with a specific object (detected by PeopleNet) rather than the whole frame?

You can find the config files attached.
config_triton_infer_primary_agegender.txt (1.3 KB)
config_triton_infer_primary_peoplenet.txt (2.0 KB)
config.txt (375 Bytes)

Your efforts are appreciated.

  1. Does the detector model work? Can you see the bounding boxes? If you can see the bboxes but there are no classifier results, it should be a problem in the sgie's configuration.
  2. Does the sgie work when tested with a third-party tool? Please understand the preprocess and postprocess parameters first. In config_triton_infer_primary_agegender.txt, process_mode should be PROCESS_MODE_FULL_FRAME, and please add preprocess and postprocess configurations. Please refer to /opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app-triton/config_infer_secondary_plan_engine_carcolor.txt, which is an sgie classifier model.
  3. Here is a sample that uses the age_gender model.

1- Yes, the detector works fine and the bounding boxes are drawn on the video. Where should the classifier results be saved?

2- What do you mean by asking if sgie works using third-party? How can I try something like this?

3- In config_triton_infer_primary_agegender.txt, do you mean process_mode should be PROCESS_MODE_CLIP_OBJECTS, since it is an sgie?

4- Won't the default pre/post-processing be added if I don't add my own?

5- This example uses nvinfer not nvinferserver. Can you provide a sample using triton?

Thank you for your support.

  1. The classifier results will be saved to the metadata, and the osd will draw them as the bbox label; you can run deepstream-test2 to check.
  2. How did you get this model? How did you verify the model works if there is no test tool? If a test tool or test code works, please understand how to set the preprocess and postprocess parameters.
  3. agegender is an sgie.
  4. Please refer to /opt/nvidia/deepstream/deepstream-6.3/samples/configs/deepstream-app-triton/config_infer_secondary_plan_engine_carcolor.txt to write your own configuration.
  5. Currently there is no ready sample.

1- Which metadata do you mean? obj_meta? Can I just use the probe function from deepstream-test2? I don't want to draw the label yet; I just want to print the resulting vector.

2- Yes, I tested this model separately and it works fine. I added the preprocess, but I think the post-processing will be added inside the code.

3- Yes, it is. In your first response you said: “please understand the preprocess and postprocess parameters first. In config_triton_infer_primary_agegender.txt, process_mode should be PROCESS_MODE_FULL_FRAME”
So, should it be PROCESS_MODE_FULL_FRAME or PROCESS_MODE_CLIP_OBJECTS? In the sample you mentioned, it is PROCESS_MODE_CLIP_OBJECTS.

4- I just followed it to write my preprocess.

Another question: do I need a tracker to use the sgie? Does the sgie work on the tracking ID given by the tracker?

  1. Yes, NvDsClassifierMeta, which is in NvDsObjectMeta. Yes, please refer to pgie_src_pad_buffer_probe of deepstream_3d_action_recognition.cpp for how to access NvDsClassifierMeta.
  2. Please refer to /opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app-triton/config_infer_secondary_plan_engine_carcolor.txt.

If your model needs a sequence of pictures, a tracker is needed; otherwise it is not.
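As an editor's aside: the NvDsClassifierMeta traversal mentioned above can be sketched in Python. The sketch below uses plain Python objects in place of the pyds casts (pyds.NvDsObjectMeta.cast, pyds.NvDsClassifierMeta.cast, pyds.NvDsLabelInfo.cast), so it only illustrates the nested-loop structure a real pad probe would use; the field names mirror the DeepStream Python bindings, and the fake frame at the bottom is purely illustrative.

```python
# Structural sketch of walking DeepStream metadata to reach classifier
# results: frame_meta -> obj_meta_list -> classifier_meta_list -> label_info_list.
# Plain objects stand in for the pyds casts a real probe would perform.
from types import SimpleNamespace


def collect_classifier_labels(frame_meta):
    """Return (object_id, result_label) pairs, mirroring the l_obj / l_class /
    l_label linked-list loops of a real buffer-probe callback."""
    labels = []
    l_obj = frame_meta.obj_meta_list
    while l_obj is not None:
        obj_meta = l_obj.data          # real code: pyds.NvDsObjectMeta.cast(l_obj.data)
        l_class = obj_meta.classifier_meta_list
        while l_class is not None:
            class_meta = l_class.data  # real code: pyds.NvDsClassifierMeta.cast(...)
            l_label = class_meta.label_info_list
            while l_label is not None:
                label_info = l_label.data  # real code: pyds.NvDsLabelInfo.cast(...)
                labels.append((obj_meta.object_id, label_info.result_label))
                l_label = l_label.next
            l_class = l_class.next
        l_obj = l_obj.next
    return labels


# Tiny fake frame: one tracked person object carrying one classifier result.
label = SimpleNamespace(result_label="female")
cmeta = SimpleNamespace(label_info_list=SimpleNamespace(data=label, next=None))
obj = SimpleNamespace(object_id=1,
                      classifier_meta_list=SimpleNamespace(data=cmeta, next=None))
frame = SimpleNamespace(obj_meta_list=SimpleNamespace(data=obj, next=None))

print(collect_classifier_labels(frame))  # [(1, 'female')]
```

In an actual pipeline the probe would be attached downstream of the sgie, e.g. with tiler.get_static_pad("sink").add_probe(Gst.PadProbeType.BUFFER, probe_fn, 0), and frame_meta would come from the batch meta of the Gst buffer.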

Hello Fanzh,

I was able to fix the issues you mentioned, but now I get a different error.

Do you have any idea why this happened?

Thanks

could you share the current config_triton_infer_primary_agegender.txt?

Sure!

infer_config {
  unique_id: 3
  gpu_ids: [0]
  max_batch_size: 1
  backend {
    triton {
      model_name: "age_gender"
      version: -1
      model_repo {
        root: "/opt/nvidia/deepstream/deepstream-6.3/samples/triton_model_repo"
        strict_model_config: true
      }
    }
  }
  preprocess {
    network_format: IMAGE_FORMAT_RGB
    tensor_order: TENSOR_ORDER_LINEAR
    frame_scaling_hw: FRAME_SCALING_HW_DEFAULT
  }
}
input_control {
  process_mode: PROCESS_MODE_CLIP_OBJECTS
  operate_on_gie_id: 1
  operate_on_class_ids: [0]
  interval: 0
  async_mode: true
}
output_control {
  output_tensor_meta: true
}


Please add a postprocess configuration. Please refer to deepstream-infer-tensor-meta-test/inferserver/dstensor_sgie1_config.txt for an "output_tensor_meta: true" sample, and to deepstream-test2/dstest2_sgie1_nvinferserver_config.txt for an "output_tensor_meta: false" sample.
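As an editor's aside: a classification postprocess block in nvinferserver's protobuf text format looks roughly like the sketch below, following the style of the referenced sample configs. The label-file path and threshold value here are placeholders, not values from this thread. With output_tensor_meta: false, the built-in parser uses such a block to fill NvDsClassifierMeta; with output_tensor_meta: true, the raw output tensors are attached to the metadata for custom parsing in a probe instead.

```
postprocess {
  # placeholder path; point this at the labels file of your own model
  labelfile_path: "/opt/nvidia/deepstream/deepstream-6.3/samples/triton_model_repo/age_gender/labels.txt"
  classification {
    # placeholder confidence threshold
    threshold: 0.5
  }
}
```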

Okay, I’ll add the postprocess config.
I am sorry, but I don't even know the difference between setting it to true and false.

Okay, it is finally working!!

I had some issues that wouldn't have been solved without your support, Fanzh!

1- My model was expecting 1 image while there were 4 detected objects, so I had to convert the age/gender model to work with a larger batch size.

2- Added the postprocessing to the config.

3- Created a probe function for the tiler component, not the sgie itself.

Thank you one more time.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.