• Hardware Platform (Jetson / GPU): NVIDIA GeForce RTX 3090
• DeepStream Version: 6.3
• JetPack Version (valid for Jetson only):
• TensorRT Version: 8.4.0
• NVIDIA GPU Driver Version (valid for GPU only): 535.113.01
• Issue Type (questions, new requirements, bugs): questions
Hello,
I am using nvinferserver for inference in a DeepStream pipeline that consists of two models (PeopleNet + a classifier). I am using deepstream-test3 as a template and modifying the code to make it work.
Currently, there is a config file for PeopleNet (config_triton_infer_primary_peoplenet.txt) and a config file for the classifier model (config_triton_infer_primary_agegender.txt). There is also a model configuration file for the Triton server.
streammux.link(queue1)
queue1.link(pgie)
pgie.link(tracker)
tracker.link(sgie)
sgie.link(queue2)
if nvdslogger:
queue2.link(nvdslogger)
nvdslogger.link(tiler)
else:
queue2.link(tiler)
tiler.link(queue3)
queue3.link(nvvidconv)
nvvidconv.link(queue4)
queue4.link(nvosd)
nvosd.link(queue5)
queue5.link(nvvidconv2)
nvvidconv2.link(capsfilter)
capsfilter.link(encoder)
encoder.link(codeparser)
sinkpad1 = container.get_request_pad("video_0")
if not sinkpad1:
sys.stderr.write(" Unable to get the sink pad of qtmux \n")
srcpad1 = codeparser.get_static_pad("src")
if not srcpad1:
sys.stderr.write(" Unable to get mpeg4 parse src pad \n")
srcpad1.link(sinkpad1)
container.link(sink)
The pipeline works, i.e. there are no issues with loading the models or running the pipeline, but I am not able to get the classifier results, and I don't know where they are stored.
Do I need a separate probe function for the sgie, in addition to the one used for the pgie? And if so, should that probe function deal with a specific object (detected by PeopleNet) rather than with the whole frame?
Does the detector model work, i.e. can you see the bounding boxes? If you can see the bboxes but there are no classifier results, it should be a problem with the sgie's configuration.
Does the sgie work when tested with a third-party tool? Please understand the preprocess and postprocess parameters first. In config_triton_infer_primary_agegender.txt, process_mode should be PROCESS_MODE_FULL_FRAME, and please add the preprocess and postprocess configurations. Please refer to /opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app-triton/config_infer_secondary_plan_engine_carcolor.txt, which is the configuration of an sgie classifier model.
The classifier results will be saved to the metadata, and the OSD will draw them as the bbox label. You can run deepstream-test2 to check.
How did you get this model? How do you know the model is fine if there is no test tool? If a test tool or code works, please use it to understand how to set the preprocess and postprocess parameters.
agegender is an sgie.
Please refer to /opt/nvidia/deepstream/deepstream-6.3/samples/configs/deepstream-app-triton/config_infer_secondary_plan_engine_carcolor.txt to write your own configuration.
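Following the pattern of the carcolor sample mentioned above, a minimal nvinferserver sgie configuration could be sketched as below. All model names, paths, class IDs, and normalization values are placeholders that must be adapted to the agegender model; only the field layout follows the sample.

```
infer_config {
  unique_id: 2                  # must differ from the pgie's unique_id
  gpu_ids: [0]
  max_batch_size: 16
  backend {
    triton {
      model_name: "agegender"   # placeholder: your Triton model name
      version: -1
      model_repo {
        root: "../../triton_model_repo"   # placeholder path
        strict_model_config: true
      }
    }
  }
  preprocess {
    network_format: IMAGE_FORMAT_BGR
    tensor_order: TENSOR_ORDER_LINEAR
    maintain_aspect_ratio: 0
    normalize {
      scale_factor: 1.0         # placeholder: match your model's training
      channel_offsets: [0, 0, 0]
    }
  }
  postprocess {
    labelfile_path: "labels_agegender.txt"   # placeholder
    classification {
      threshold: 0.51
    }
  }
}
input_control {
  process_mode: PROCESS_MODE_CLIP_OBJECTS    # as in the carcolor sample
  operate_on_gie_id: 1                       # run only on the pgie's objects
  operate_on_class_ids: [0]                  # e.g. the PeopleNet person class
  async_mode: true
  object_control {
    bbox_filter {
      min_width: 64
      min_height: 64
    }
  }
}
```

With process_mode set to PROCESS_MODE_CLIP_OBJECTS, nvinferserver crops each object detected by the pgie and feeds the crop to the classifier, which is the usual setup for a secondary classifier.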
1- Which metadata do you mean, obj_meta? Can I just use the probe function from deepstream-test2? I don't want to draw the label yet; I just want to print the resulting vector.
2- Yes, I tested this model separately and it works fine. I added the preprocess configuration, but I think the post-processing will be handled inside the code.
3- Yes, it is. In your first response you said: "please understand the preprocess and postprocess parameters first. In config_triton_infer_primary_agegender.txt, process_mode should be PROCESS_MODE_FULL_FRAME".
So, should it be PROCESS_MODE_FULL_FRAME or PROCESS_MODE_CLIP_OBJECTS? In the sample you mentioned, it is PROCESS_MODE_CLIP_OBJECTS.
4- I just followed it to write my preprocess configuration.
Another question: do I need a tracker to use the sgie? Does the sgie operate on the tracking IDs given by the tracker?
1. Yes, NvDsClassifierMeta, which is contained in NvDsObjectMeta. And yes, please refer to pgie_src_pad_buffer_probe in deepstream_3d_action_recognition.cpp for how to access NvDsClassifierMeta.
2. Please refer to /opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app-triton/config_infer_secondary_plan_engine_carcolor.txt.
If your model needs a sequence of pictures, a tracker is needed; otherwise it is not.
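Along the lines of the Python probes in the deepstream_python_apps samples, a sketch of an sgie src-pad probe that walks NvDsObjectMeta → NvDsClassifierMeta → NvDsLabelInfo and prints each classifier result could look like the following. The attachment point and field names follow the pyds samples; treat this as a starting point, not the exact probe from any sample.

```python
# Sketch of an sgie source-pad buffer probe (pyds bindings assumed,
# as in deepstream-test2).  Attach it with:
#   sgie.get_static_pad("src").add_probe(
#       Gst.PadProbeType.BUFFER, sgie_src_pad_buffer_probe, 0)

def format_result(object_id, label, prob):
    """Pure helper: render one classifier result as a log line."""
    return "object %d: %s (%.2f)" % (object_id, label, prob)

def sgie_src_pad_buffer_probe(pad, info, u_data):
    import pyds
    from gi.repository import Gst

    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK

    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        l_obj = frame_meta.obj_meta_list
        while l_obj is not None:
            obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
            # The sgie attaches an NvDsClassifierMeta to each object
            # it classified; the labels live in label_info_list.
            l_cls = obj_meta.classifier_meta_list
            while l_cls is not None:
                cls_meta = pyds.NvDsClassifierMeta.cast(l_cls.data)
                l_label = cls_meta.label_info_list
                while l_label is not None:
                    label_info = pyds.NvDsLabelInfo.cast(l_label.data)
                    print(format_result(obj_meta.object_id,
                                        label_info.result_label,
                                        label_info.result_prob))
                    try:
                        l_label = l_label.next
                    except StopIteration:
                        break
                try:
                    l_cls = l_cls.next
                except StopIteration:
                    break
            try:
                l_obj = l_obj.next
            except StopIteration:
                break
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK
```

Because the probe is attached to the sgie's src pad, the classifier metadata is already present on the objects when it runs, so nothing needs to be drawn to inspect the results.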
Please add the postprocess configuration. Please refer to deepstream-infer-tensor-meta-test/inferserver/dstensor_sgie1_config.txt for an "output_tensor_meta: true" sample, and to deepstream-test2/dstest2_sgie1_nvinferserver_config.txt for an "output_tensor_meta: false" sample.
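For orientation, the two referenced samples differ roughly as sketched below (field values are illustrative, only the structure follows the samples): with output_tensor_meta set to true, the raw output tensors are attached to the metadata and you parse them yourself in a probe; with it set to false, nvinferserver runs its built-in classification postprocess and attaches NvDsClassifierMeta instead.

```
# Sketch A - attach raw tensors, parse them yourself in a probe
# (pattern of dstensor_sgie1_config.txt):
infer_config {
  postprocess {
    other {}                   # no built-in parsing
  }
}
output_control {
  output_tensor_meta: true     # raw tensors land in the metadata
}

# Sketch B - built-in classification postprocess
# (pattern of dstest2_sgie1_nvinferserver_config.txt):
infer_config {
  postprocess {
    classification {
      threshold: 0.51
    }
  }
}
output_control {
  output_tensor_meta: false    # NvDsClassifierMeta is attached instead
}
```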