I have the following working pipeline that ingests a 1080p video stream and successfully detects objects and displays bounding boxes using a TAO Toolkit-trained MaskRCNN model (to clarify, I do not want to show segmentation masks, only bounding boxes):
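(A simplified sketch of the pipeline's shape rather than my exact command; file names and config paths are placeholders.)

```
gst-launch-1.0 \
  nvstreammux name=m batch-size=1 width=1920 height=1080 ! \
  nvinferserver config-file-path=config_infer_triton_maskrcnn.txt ! \
  nvvideoconvert ! nvdsosd ! nveglglessink \
  filesrc location=input_1080p.h264 ! h264parse ! nvv4l2decoder ! m.sink_0
```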
I noted that I'm using nvinferserver, not nvinfer. I then adapted the pipeline accordingly to use nvdspreprocess with nvinferserver, as follows:
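(Again a simplified sketch rather than my exact command; the only structural change is inserting nvdspreprocess in front of nvinferserver.)

```
gst-launch-1.0 \
  nvstreammux name=m batch-size=1 width=1920 height=1080 ! \
  nvdspreprocess config-file=config_preprocess.txt ! \
  nvinferserver config-file-path=config_infer_triton_maskrcnn.txt ! \
  nvvideoconvert ! nvdsosd ! nveglglessink \
  filesrc location=input_1080p.h264 ! h264parse ! nvv4l2decoder ! m.sink_0
```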
Given that I've already carried my model's preprocessing parameters over from the nvinferserver preprocess settings into the nvdspreprocess config, I'm not sure why no detections are being made at all.
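For reference, the mapping I did looks roughly like this on the nvdspreprocess side (the keys are the ones from the stock nvdspreprocess sample config; the values here are illustrative, not an exact copy of my file):

```
[property]
enable=1
target-unique-ids=1
network-input-order=0
processing-width=1344
processing-height=832
network-input-shape=1;3;832;1344
network-color-format=0
tensor-data-type=0
tensor-name=Input
custom-lib-path=/opt/nvidia/deepstream/deepstream/lib/gst-plugins/libcustom2d_preprocess.so
custom-tensor-preparation-function=CustomTensorPreparation

[user-configs]
# intended as the equivalent of nvinferserver's normalize scale_factor / channel_offsets
pixel-normalization-factor=0.017507
offsets=123.675;116.28;103.53
```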
To recap my observations:
nvinfer successfully displays bounding boxes with or without nvdspreprocess
nvinferserver, unfortunately, can only display bounding boxes without nvdspreprocess
nvinferserver fails to display bounding boxes the moment the nvdspreprocess element is used with it
Given that I need to display bounding boxes using nvinferserver with nvdspreprocess, like in scenarios (d) and (e), were you able to reproduce the same issue I faced?
For your settings in deepstream_app_source1_segmentation.txt, under [primary-gie], I see a reference to "triton/config_infer_primary_peopleSegNet.txt", which contains nvinferserver's own preprocess settings.
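Paraphrasing from memory rather than quoting that file verbatim, the part I'm concerned about is the preprocess block inside infer_config, something like (values illustrative):

```
preprocess {
  network_format: IMAGE_FORMAT_RGB
  tensor_order: TENSOR_ORDER_LINEAR
  maintain_aspect_ratio: 1
  normalize {
    scale_factor: 0.017507
    channel_offsets: [123.675, 116.28, 103.53]
  }
}
```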
Given that I want to use nvdspreprocess' output with nvinferserver, doesn't this mean that the output from nvdspreprocess is not actually used?
As I understand from the Gst-nvinferserver page of the DeepStream 6.4 documentation (and correct me if I'm wrong), if I want to use nvdspreprocess' output with nvinferserver, I need to disable nvinferserver's own preprocessing and provide the equivalent configuration through nvdspreprocess instead, so that nvdspreprocess' output is used rather than nvinferserver's preprocess settings.
This is necessary for me because I intend to tile the input video stream into multiple smaller ROIs for inference.
From what I understand (and correct me if I'm wrong), nvinferserver's "preprocess" config doesn't support this kind of ROI tiling, which means the only way to do it is to disable nvinferserver's "preprocess" and leave the preprocessing entirely to nvdspreprocess; a sketch of the kind of ROI configuration I have in mind is below.
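Something along these lines in the nvdspreprocess config is what I mean, using the group keys from the stock sample config (the ROI coordinates are made up, splitting a 1920x1080 frame into four 960x540 tiles):

```
[group-0]
src-ids=0
custom-input-transformation-function=CustomAsyncTransformation
process-on-roi=1
# one x;y;w;h quadruple per ROI
roi-params-src-0=0;0;960;540;960;0;960;540;0;540;960;540;960;540;960;540
```

My understanding is that the first dimension of network-input-shape in [property] then has to match the number of ROIs batched per buffer (e.g. 4;3;832;1344 for the four tiles above), but please correct me if that's wrong.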
The "input-tensor-meta=1" setting is present in deepstream_app_source1_segmentation.txt too. This changes the input of gst-nvinferserver. deepstream-app and gst-nvinferserver are both open source, so you can read the code to understand how it works.
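The relevant pieces in the deepstream-app config are roughly as below (paraphrased, not the literal file contents):

```
[pre-process]
enable=1
config-file=config_preprocess.txt

[primary-gie]
enable=1
# plugin-type=1 selects gst-nvinferserver instead of gst-nvinfer
plugin-type=1
# consume tensors prepared by nvdspreprocess instead of preprocessing inside the plugin
input-tensor-meta=1
config-file=triton/config_infer_primary_peopleSegNet.txt
```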