Preprocessing steps for DashCamNet


I tried to run inference on the DashCamNet TensorRT engine (converted using tlt-convert) by following the pre-processing steps documented in the Gst-nvinfer docs. However, the inference results (floats) did not match those produced through DeepStream, or through a modified version of deepstream-infer-tensor-meta-test. The pgie configs did not help (for DashCamNet the net-scale-factor seems to be 1/255, and the mean values seem to be the defaults). What are the exact pre-processing steps for the DashCamNet model?


TensorRT Version:
GPU Type: GTX1070
Nvidia Driver Version:450.66
CUDA Version: 10.2
CUDNN Version: 7.6.5
Operating System + Version: Ubuntu 18.04
Python Version (if applicable):
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):

Relevant Files

Changed within the docker image:

Steps To Reproduce

The images used for both are the same
Pull the docker image: syther22/deepstream_dashcam_264
Run the container, then run the deepstream-infer-tensor-meta-app
/opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/deepstream-infer-tensor-meta-test# ./deepstream-infer-tensor-meta-app ./test.h264

  • You should see many floating-point inference results, starting with 3.33143e-07 2.8675e-08 3.02065e-07 1.90778e-07 1.97347e-07 2.38013e-07…

Using TensorRT + CUDA, the inference results are different after following the pre-processing steps: convert to RGB, then y = net-scale-factor * (x - mean) (e.g. 1.33245e-06 7.46877e-07 1.81713e-06 1.9149e-06 1.86566e-06 2.54461e-06 5.59784e-06 8.99243e-06). I can send the source code for inference through a more direct channel.
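For reference, the pre-processing I am applying can be sketched as below. This is a minimal numpy sketch, assuming net-scale-factor = 1/255 and zero channel means (my reading of the pgie config, not a verified reproduction of the Gst-nvinfer internals), and assuming the frame is already resized to the network input resolution:

```python
import numpy as np

def preprocess(frame_bgr, net_scale_factor=1.0 / 255.0, mean=(0.0, 0.0, 0.0)):
    """Replicate y = net-scale-factor * (x - mean) as I understand it.

    frame_bgr: HxWx3 uint8 image, already resized to the network input
    (960x544 for DashCamNet). Scale and mean defaults are assumptions
    taken from my pgie config.
    """
    # BGR -> RGB, since I am converting the frame to RGB before scaling
    rgb = frame_bgr[..., ::-1].astype(np.float32)
    # per-channel mean subtraction, then scaling
    y = net_scale_factor * (rgb - np.asarray(mean, dtype=np.float32))
    # HWC -> CHW, plus a batch dimension for the TensorRT engine
    return y.transpose(2, 0, 1)[np.newaxis, ...]
```

With these defaults, a uint8 value of 255 maps to 1.0, so the tensor handed to the engine is in [0, 1], planar CHW, batch-first.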

Hi @syther666,
I believe the DeepStream forum will be able to help you better here, so I would request that you raise this query in that forum.