TAO YoloV4 16Bit grayscale model integration

Hello,

I am trying to switch my YoloV4 detection model from color to grayscale.

Prediction results from TAO training are OK:
Start to calculate AP for each class


cargo AP 0.64015
fishing AP 0.65369
kayak AP 0.51335
passenger AP 0.73766
pleasurecraft AP 0.83856
sailing AP 0.675
tanker AP 0.68877
mAP 0.67817

My pipeline is basic:
gst-launch-1.0 filesrc location=/home/tao/Videos/Montage.mp4 ! qtdemux ! h264parse ! nvv4l2decoder ! m.sink_0 nvstreammux name=m batch-size=1 width=1920 height=1080 ! nvinfer config-file-path=/home/tao/Documents/Svn/DeepTrack/Configurations/Boat/Day/V7/InferenceBoat.txt ! nvvideoconvert ! nvdsosd ! nveglglessink

My InferenceBoat.txt file:
[property]
gpu-id=0
net-scale-factor=1.0
offsets=30048.9216
model-color-format=2
labelfile-path=labels.txt
model-engine-file=yolov4_resnet18_epoch_080.onnx_b1_gpu0_fp32.engine
tlt-encoded-model=yolov4_resnet18_epoch_080.onnx
tlt-model-key=nvidia_tlt
infer-dims=1;1024;1344
maintain-aspect-ratio=0
uff-input-order=0
uff-input-blob-name=Input
batch-size=1

# 0=FP32, 1=INT8, 2=FP16 mode
network-mode=0
num-detected-classes=7
interval=0
gie-unique-id=1
is-classifier=1
#network-type=0

# Set to NMS

cluster-mode=3
output-blob-names=BatchedNMS
parse-bbox-func-name=NvDsInferParseCustomBatchedNMSTLT
custom-lib-path=libnvds_infercustomparser.so

[class-attrs-all]
pre-cluster-threshold=0.3
roi-top-offset=0
roi-bottom-offset=0
detected-min-w=8
detected-min-h=4
detected-max-w=2400
detected-max-h=1800

I set model-color-format to 2.
I assume nvinfer does the conversion to 16-bit grayscale before inference (it is in the documentation).
infer-dims is OK; the model was trained with 1344×1024 as input size.
But nothing is detected when I run inference on the video.
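For reference, the Gst-nvinfer documentation describes the per-pixel preprocessing as y = net-scale-factor × (x − offset). A quick sketch of what the values in the config above imply for a 16-bit grayscale input (the sample pixel values here are hypothetical):

```python
import numpy as np

# Hypothetical 16-bit grayscale pixels spanning the uint16 range
frame = np.array([[0, 30048, 65535]], dtype=np.uint16)

net_scale_factor = 1.0      # from the config above
offset = 30048.9216         # single offset for the 1-channel input

# nvinfer preprocessing: y = net-scale-factor * (x - offset)
y = net_scale_factor * (frame.astype(np.float32) - offset)
```

With net-scale-factor=1.0 the pixels are only mean-centered, not rescaled, so the network must have been trained on inputs in that same range.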

Where can I find an example configuration file for YOLOv4 16-bit grayscale?

Hi @lecuyer1,

I think this post might be better suited for the TAO toolkit community than this generic category.

I hope it is ok if I move it over there?

Thanks!

To narrow this down, could you please run yolo_v4 inference first to check whether the inference results are fine? Refer to YOLOv4 - NVIDIA Docs.

Ok, no problem to move the topic.
Best regards

As mentioned above, please run yolo_v4 inference first to check whether the inference results are fine.

Ok, I'm going to try tao inference. Can I use RGB pictures, or do I need to convert them to 16-bit grayscale, the same as the training sample data?

Please try to run inference with the same data as training.
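On converting RGB images to match grayscale training data: below is a minimal numpy sketch of one common approach (BT.601 luma, then widening the 8-bit range to 16 bits). The exact conversion used to produce the training set may differ, so treat this only as an assumption, and the test pattern is synthetic:

```python
import numpy as np

# Synthetic 8-bit BGR frame standing in for a real image file
bgr = np.zeros((4, 4, 3), dtype=np.uint8)
bgr[..., 1] = 200  # mostly-green test pattern

# BT.601 luma weights (the convention OpenCV's cvtColor uses for BGR->GRAY)
b = bgr[..., 0].astype(np.float32)
g = bgr[..., 1].astype(np.float32)
r = bgr[..., 2].astype(np.float32)
gray8 = np.round(0.114 * b + 0.587 * g + 0.299 * r).astype(np.uint16)

# Widen the 8-bit range to 16-bit: 0..255 maps to 0..65535
gray16 = gray8 * 257
```

Whatever conversion is chosen, the key point from the reply above stands: inference input should go through the same preprocessing as the training images, otherwise the pixel statistics the offsets were computed from will not match.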

Ok, I'll try next week with the Kitty_split grayscale folder.
The TAO yolov4 16-bit grayscale Jupyter notebook doesn't offer it the way the yolov4 (RGB) Jupyter notebook does. I'll try to adapt the command line.

Feedback next week.
Best regards.

I'm closing this topic since there has been no update from you for some time; I assume the issue was resolved.
If you still need support, please open a new topic. Thanks.