Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU) GPU
• DeepStream Version 6.1
• JetPack Version (valid for Jetson only)
• TensorRT Version 8.2.5
• NVIDIA GPU Driver Version (valid for GPU only) 511.65
• Issue Type( questions, new requirements, bugs) bug
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
I was testing the DeepStream Python example deepstream-ssd-parser inside the docker container nvcr.io/nvidia/deepstream:6.1-triton. When using the Triton server inside the container, everything works fine. But as soon as I switch to an external Triton server (nvcr.io/nvidia/tritonserver:22.06-py3), the results from the model contain errors. Specifically, the first 4 entries in every model output layer are wrong. To show them, I printed every output inside make_nodi() from ssd_parser.py:
print("Confidence", pyds.get_detections(score_layer.buffer, index))
As for the changes in dstest_ssd_nopostprocess.txt, I replaced the model_repo entry with:
grpc {
url: "0.0.0.0:8001"
}
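For context, the surrounding backend section of the config now looks roughly like this. This is a sketch: the model_name is an assumption taken from the stock sample config, and swapping model_repo for the grpc block is the only actual change:

infer_config {
  backend {
    triton {
      model_name: "ssd_inception_v2_coco_2018_01_28"  # assumed, as in the stock sample config
      version: -1
      grpc {
        url: "0.0.0.0:8001"  # external nvcr.io/nvidia/tritonserver:22.06-py3 instance
      }
    }
  }
}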
The output is as follows:
Confidence -1.4424493044793344e+17
Confidence 4.5596850730665223e-41
Confidence 6.191364830612113e+26
Confidence 4.559825202912955e-41
Confidence 0.5849516987800598
Confidence 0.4417745769023895
Confidence 0.41973045468330383
Frame Number=0 Number of Objects=7 Vehicle_count=0 Person_count=0
Confidence -1.4435625600024576e+17
Confidence 4.5596850730665223e-41
Confidence 6.191364830612113e+26
Confidence 4.559825202912955e-41
Confidence 0.6589468121528625
Confidence 0.5667120814323425
Confidence 0.45494136214256287
Frame Number=1 Number of Objects=7 Vehicle_count=0 Person_count=0
Confidence -1.4436422745954714e+17
Confidence 4.5596850730665223e-41
Confidence 6.191364830612113e+26
Confidence 4.559825202912955e-41
Confidence 0.664429783821106
Confidence 0.5177415609359741
Frame Number=2 Number of Objects=6 Vehicle_count=0 Person_count=0
I am not quite sure what causes this or how it can be avoided. The same behavior also occurs when testing with a custom PyTorch model.