DeepStream results differ between the PC and the Jetson Nano board.

I retrained the secondary GIE, but the classification result is nowhere near as good as it was on the PC.

PC classification accuracy: 80%
Jetson Nano classification accuracy: 10%

Do you know the reason?

Below are my config files.

--------video_config----------
[application]
enable-perf-measurement=1
perf-measurement-interval-sec=1
#gie-kitti-output-dir=streamscl

[tiled-display]
enable=0
rows=1
columns=1
width=1280
height=720

[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP 5=CSI
type=2
num-sources=1
#uri=rtsp://root:root@10.10.100.101:554/cam0_0
uri=file:/home/hana/config/Primary_Detector_Nano/test1.avi
#uri=file:/home/hana/config/Primary_Detector_Nano/bus/917.jpg
gpu-id=0
cudadec-memtype=0
#camera-width=1280
#camera-height=720
#camera-fps-n=30
#camera-fps-d=1

[sink0]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File 4=RTSPStreaming 5=Overlay
type=1
sync=0
display-id=0
offset-x=0
offset-y=0
width=0
height=0
overlay-id=1
source-id=0

[sink1]
enable=0
type=3
#1=mp4 2=mkv
container=1
#1=h264 2=h265 3=mpeg4
codec=3
sync=0
bitrate=2000000
output-file=out.mp4
source-id=0

[sink2]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File 4=RTSPStreaming 5=Overlay
type=4
#1=h264 2=h265
codec=1
sync=0
bitrate=4000000

#set below properties in case of RTSPStreaming

rtsp-port=8554
udp-port=5400

[osd]
enable=1
border-width=0
text-size=0
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Serif
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0

[streammux]
gpu-id=0
batch-size=1
batched-push-timeout=-1

#Set muxer output width and height

width=544
height=304
nvbuf-memory-type=0

#config-file property is mandatory for any gie section.
#Other properties are optional and if set will override the properties set in
#the infer config file.

[primary-gie]
enable=1
model-engine-file=Primary_Detector_Nano/resnet18.engine
#Required to display the PGIE labels, should be added even when using config-file
#property
batch-size=1
#Required by the app for OSD, not a plugin property
bbox-border-color0=1;0;0;1
bbox-border-color1=0;1;1;1
bbox-border-color2=0;0;1;1
bbox-border-color3=0;1;0;1
interval=0
#Required by the app for SGIE, when used along with config-file property
gie-unique-id=1
config-file=config_infer_primary_nano(khy).txt

[secondary-gie0]
enable=1
model-engine-file=Primary_Detector_Nano/resnet10.engine
gpu-id=0
batch-size=1
gie-unique-id=2
operate-on-gie-id=1
operate-on-class-ids=0;
config-file=config_infer_secondary_vehicletypes(khy).txt

[tracker]
enable=0
tracker-width=128
tracker-height=128
#ll-lib-file=/opt/nvidia/deepstream/deepstream-4.0/lib/libnvds_mot_iou.so
#ll-lib-file=/opt/nvidia/deepstream/deepstream-4.0/lib/libnvds_nvdcf.so
ll-lib-file=/opt/nvidia/deepstream/deepstream-4.0/lib/libnvds_mot_klt.so
#ll-config-file required for DCF/IOU only
#ll-config-file=tracker_config.yml
#ll-config-file=iou_config.txt
gpu-id=0
#enable-batch-process applicable to DCF only
enable-batch-process=1
gie-unique-id=2

[tests]
file-loop=1

---------secondary config-------------
[property]
gpu-id=0
net-scale-factor=1
#model-file=…/…/models/Secondary_VehicleTypes/resnet18.caffemodel
#proto-file=…/…/models/Secondary_VehicleTypes/resnet18.prototxt
model-engine-file=Primary_Detector_Nano/resnet10.engine
int8-calib-file=Primary_Detector_Nano/cal.bin
#mean-file=Secondary_VehicleTypes/mean.ppm
labelfile-path=Primary_Detector_Nano/labels(sec).txt
batch-size=1
model-color-format=1

#0=FP32, 1=INT8, 2=FP16 mode

network-mode=2
is-classifier=1
#process-mode=2
#network-type=1
#interval=0
output-blob-names=predictions/Softmax
classifier-async-mode=0
classifier-threshold=0.51
input-object-min-width=64
input-object-min-height=64
operate-on-gie-id=1
operate-on-class-ids=0
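
For reference, here is how I read the precision-related keys in the secondary config above (this is only my understanding of nvinfer's behaviour, not something I have verified):

[property]
# When model-engine-file points at an existing serialized engine, nvinfer
# deserializes and uses it as-is, so the precision baked in at conversion time
# is what actually runs on each platform.
model-engine-file=Primary_Detector_Nano/resnet10.engine
# 0=FP32, 1=INT8, 2=FP16 -- only consulted if nvinfer has to rebuild the engine
network-mode=2
# The calibration cache is only read in INT8 mode (network-mode=1)
int8-calib-file=Primary_Detector_Nano/cal.bin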

Hi,
Can you specify in detail what the classification target is and whether the result is good or bad? And can you paste your classification results for both the PC and the Nano?

It currently detects vehicles and classifies cars, trucks, and buses using the secondary GIE.

Below is a link to the results:

test.mp4: Nano result
test(pc).csv: PC result

https://drive.google.com/open?id=1L2i2bS5006Vp73_ZUBTmTuHvP0-k9uyX

Also, can you tell us how you retrained the model and how you tested the accuracy?

The primary GIE was trained with resnet18, and the secondary GIE was trained with resnet10.

The link below contains my resources for this.

https://drive.google.com/open?id=1L2i2bS5006Vp73_ZUBTmTuHvP0-k9uyX

Moving this topic from the DS forum into the TLT forum.

Hi loveme1492,
Which network did you use to train resnet10 and resnet18? Detectnet_v2?

How did you generate the resnet10.engine and resnet18.engine?
Please note that tlt-converter is different between the PC and the Jetson platform; there are two versions.
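
For reference, a rough sketch of generating the engine directly on the Nano with the aarch64 build of tlt-converter (the .etlt filename, key variable, and input dimensions are placeholders rather than values from this thread; run tlt-converter -h to confirm the flags available in your version):

./tlt-converter resnet10_vehicletypes.etlt \
    -k $NGC_KEY \
    -d 3,224,224 \
    -o predictions/Softmax \
    -t fp16 \
    -m 1 \
    -e Primary_Detector_Nano/resnet10.engine

Whatever precision is selected with -t is what the deployed engine runs at, so it should match the precision used when the PC engine was built if the two accuracy numbers are being compared directly.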

Hi loveme1492,

We haven't heard back from you in a couple of weeks, so we are marking this topic closed.
Please open a new forum issue when you are ready and we’ll pick it up there.