Why do I get a wrong prediction running the model outside of DeepStream?

Please provide complete information as applicable to your setup.

• GTX 1070
• DeepStream 6.2
• TensorRT 8
• NVIDIA GPU Driver Version 525
• Issue type: questions

I am using the VehicleMakeNet and VehicleTypeNet models from: deepstream_reference_apps/deepstream_app_tao_configs at master · NVIDIA-AI-IOT/deepstream_reference_apps · GitHub

Their config file is this:

[property]
gpu-id=0
net-scale-factor=1
offsets=124;117;104
tlt-model-key=tlt_encode
tlt-encoded-model=../../models/tao_pretrained_models/vehiclemakenet/resnet18_vehiclemakenet_pruned.etlt
labelfile-path=labels_vehiclemakenet.txt
int8-calib-file=../../models/tao_pretrained_models/vehiclemakenet/vehiclemakenet_int8.txt
model-engine-file=../../models/tao_pretrained_models/vehiclemakenet/resnet18_vehiclemakenet_pruned.etlt_b4_gpu0_int8.engine
input-dims=3;224;224;0
uff-input-blob-name=input_1
batch-size=4
process-mode=2
model-color-format=0
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=1
network-type=1
num-detected-classes=4
interval=0
gie-unique-id=1
output-blob-names=predictions/Softmax
classifier-threshold=0.2
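For reference, with these settings nvinfer's per-channel preprocessing amounts to `y = net-scale-factor * (x - offsets)`, computed in floating point before the tensor is fed to the network. A minimal numpy sketch of that arithmetic (the channel order of the offsets is assumed to follow `model-color-format`; the demo image is synthetic):

```python
import numpy as np

# Sketch of nvinfer-style preprocessing for the config above:
# y = net-scale-factor * (x - offsets), applied per channel in float.
net_scale_factor = 1.0
offsets = np.array([124.0, 117.0, 104.0])  # from offsets=124;117;104

def preprocess(img_hwc: np.ndarray) -> np.ndarray:
    """img_hwc: HxWx3 image, channels in the order the offsets assume."""
    x = img_hwc.astype(np.float32)
    y = net_scale_factor * (x - offsets)   # float math: negatives survive
    return np.transpose(y, (2, 0, 1))      # HWC -> CHW, matching input-dims=3;224;224

demo = np.full((224, 224, 3), 130, dtype=np.uint8)  # synthetic gray image
blob = preprocess(demo)
print(blob.shape)                     # (3, 224, 224)
print(blob[0, 0, 0], blob[2, 0, 0])   # 6.0 26.0
```

Any script that tries to reproduce DeepStream's result outside the pipeline has to match this arithmetic exactly, including channel order.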

They also provide the .caffemodel files, so I use OpenCV to load them. Here is my code:

import cv2
import numpy as np
from scipy.special import softmax

# Load the Caffe model and run one image through it.
model = cv2.dnn.readNetFromCaffe("resnet18.prototxt", "resnet18.caffemodel")

img = cv2.imread("test2.jpg")
img = cv2.resize(img, (224, 224))
mean_values = cv2.imread("mean.ppm")
img = cv2.subtract(img, mean_values)

img_blob = cv2.dnn.blobFromImage(img)
model.setInput(img_blob)
output = model.forward()
print(output)

labels_filename = 'labels.txt'
labels = np.loadtxt(labels_filename, str, delimiter=';')
print(labels)

probs = softmax(output, axis=1)
pred_label = labels[np.argmax(probs, axis=1)]
print(probs)
print(pred_label)

However, I tested with more than 10 cars, and it always outputs the wrong results. I checked the file “mean.ppm”; its values match “offsets=124;117;104” in the DeepStream config file. Why can’t I get the correct result by calling the Caffe model directly with OpenCV?
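One preprocessing difference worth noting in the script above: `cv2.subtract` on uint8 images performs saturating subtraction, so any pixel darker than the mean clamps to 0, whereas DeepStream applies its offsets in floating point. A small numpy illustration of the divergence (values chosen for the example):

```python
import numpy as np

# A dark pixel minus a larger mean: saturating uint8 subtraction
# (what cv2.subtract does) vs. float subtraction (what nvinfer does).
pixel = np.array([[50]], dtype=np.uint8)

saturated = np.subtract(pixel.astype(np.int16), 117).clip(0, 255)  # mimics cv2.subtract clamping
floating = pixel.astype(np.float32) - 117.0                        # float-domain subtraction

print(int(saturated[0, 0]))  # 0   (sign information lost)
print(floating[0, 0])        # -67.0
```

If float-domain mean subtraction is needed, `cv2.dnn.blobFromImage` accepts a `mean` argument that is subtracted in float; channel order (BGR from `cv2.imread` vs. the order the model was trained with) is another assumption that would need checking against the training setup.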

The resnet18 model in our sample is trained and pruned with the NVIDIA TAO Toolkit (TAO Toolkit | NVIDIA Developer); it may not be the same model you use in your script. Please use your model according to how it was trained.

Following your suggestion, I found the .etlt version of this model and used tao-converter to convert it.

This is my command: tao-converter -d 3,224,224 -e vehiclemakenet/resnet18_vehiclemakenet.engine -t int8 -k tlt_encode -m 4 vehiclemakenet/resnet18_vehiclemakenet_pruned.etlt -c vehiclemakenet/vehiclemakenet_int8.txt

but I got an error:

[ERROR] 3: conv1/convolution:kernel weights has count 2352 but 175616 was expected
[ERROR] 4: conv1/convolution: count of 2352 weights in kernel, but kernel dimensions (7,7) with 224 input channels, 16 output channels and 1 groups were specified. Expected Weights count is 224 * 7*7 * 16 / 1 = 175616
[ERROR] 4: [convolutionNode.cpp::computeOutputExtents::43] Error Code 4: Internal Error (conv1/convolution: number of kernel weights does not match tensor dimensions)
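The counts in the error are internally self-consistent, which makes the failure easier to read: the weights were built for a 3-channel input, but the parser treated 224 as the channel dimension, suggesting the 3,224,224 input dims were not picked up. A quick arithmetic check:

```python
# conv1: 7x7 kernels, 16 output filters (per the error message).
k = 7 * 7 * 16            # kernel area x output channels = 784

assert 3 * k == 2352      # actual weight count: conv1 was trained on 3 input channels
assert 224 * k == 175616  # "expected" count if 224 were the input-channel dimension
print("weight counts match a 3-vs-224 channel mix-up")
```

So the model file itself looks fine; the converter simply did not see the intended input dimensions, which points at how the command-line arguments were parsed.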

How should I fix it?

There is no update from you for a period, assuming this is not an issue anymore. Hence we are closing this topic. If need further support, please open a new one. Thanks

You can use “./tao-converter -h” to get the usage.

tao-converter -d 3,224,224 -e vehiclemakenet/resnet18_vehiclemakenet.engine -t int8 -k tlt_encode -b 4 -c vehiclemakenet/vehiclemakenet_int8.txt vehiclemakenet/resnet18_vehiclemakenet_pruned.etlt
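Laid out one flag per line, the working command from the reply is easier to compare against the failing one: the positional .etlt path comes last, after all flags, and `-b` replaces `-m` (flag meanings are my reading of the usage text; confirm with `./tao-converter -h`):

```shell
# -d input dims (CHW), -e output engine path, -t precision,
# -k model load key, -b batch size, -c INT8 calibration cache.
# The .etlt model path is positional and must come after all flags.
tao-converter \
  -d 3,224,224 \
  -e vehiclemakenet/resnet18_vehiclemakenet.engine \
  -t int8 \
  -k tlt_encode \
  -b 4 \
  -c vehiclemakenet/vehiclemakenet_int8.txt \
  vehiclemakenet/resnet18_vehiclemakenet_pruned.etlt
```

In the failing command the .etlt path appeared before `-c`, so the remaining flags were likely never parsed, which would explain why the converter fell back to wrong input dimensions.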

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.