Failed to load custom model in detectnet-camera.py

When I load my custom model with:
./detectnet-camera --width=640 --height=480 --camera=/dev/video0 --model=/home/fyp/maaz.uff --class_labels=/home/fyp/labels.txt

I received the following output:

[gstreamer] initialized gstreamer, version 1.14.5.0
[gstreamer] gstCamera attempting to initialize with GST_SOURCE_NVARGUS, camera /dev/video0
[gstreamer] gstCamera pipeline string:
v4l2src device=/dev/video0 ! video/x-raw, width=(int)640, height=(int)480, format=YUY2 ! videoconvert ! video/x-raw, format=RGB ! videoconvert ! appsink name=mysink
[gstreamer] gstCamera successfully initialized with GST_SOURCE_V4L2, camera /dev/video0

detectnet-camera: successfully initialized camera device
width: 640
height: 480
depth: 24 (bpp)

detectNet -- loading detection network model from:
-- prototxt NULL
-- model /home/fyp/maaz.uff
-- input_blob 'data'
-- output_cvg 'coverage'
-- output_bbox 'bboxes'
-- mean_pixel 0.000000
-- mean_binary NULL
-- class_labels /home/fyp/labels.txt
-- threshold 0.500000
-- batch_size 1

[TRT] TensorRT version 5.1.6
[TRT] loading NVIDIA plugins...
[TRT] Plugin Creator registration succeeded - GridAnchor_TRT
[TRT] Plugin Creator registration succeeded - NMS_TRT
[TRT] Plugin Creator registration succeeded - Reorg_TRT
[TRT] Plugin Creator registration succeeded - Region_TRT
[TRT] Plugin Creator registration succeeded - Clip_TRT
[TRT] Plugin Creator registration succeeded - LReLU_TRT
[TRT] Plugin Creator registration succeeded - PriorBox_TRT
[TRT] Plugin Creator registration succeeded - Normalize_TRT
[TRT] Plugin Creator registration succeeded - RPROI_TRT
[TRT] Plugin Creator registration succeeded - BatchedNMS_TRT
[TRT] completed loading NVIDIA plugins.
[TRT] detected model format - UFF (extension '.uff')
[TRT] desired precision specified for GPU: FASTEST
[TRT] requested fasted precision for device GPU without providing valid calibrator, disabling INT8
[TRT] native precisions detected for GPU: FP32, FP16
[TRT] selecting fastest native precision for GPU: FP16
[TRT] attempting to open engine cache file /home/fyp/maaz.uff.1.1.GPU.FP16.engine
[TRT] cache file not found, profiling network model on device GPU
[TRT] device GPU, loading /home/fyp/jetson-inference/build/aarch64/bin/ /home/fyp/maaz.uff
[TRT] UffParser: Validator error: FeatureExtractor/MobilenetV2/layer_19_2_Conv2d_4_3x3_s2_256/BatchNorm/FusedBatchNormV3: Unsupported operation _FusedBatchNormV3
[TRT] failed to parse UFF model '/home/fyp/maaz.uff'
[TRT] device GPU, failed to load /home/fyp/maaz.uff
detectNet -- failed to initialize.
detectnet-camera: failed to load detectNet model

Any help will be appreciated.

Hi,

[TRT] UffParser: Validator error: ... Unsupported operation _FusedBatchNormV3

Please note that FusedBatchNormV3 is not supported in the current TensorRT yet.
You can find our support matrix here:
https://docs.nvidia.com/deeplearning/sdk/tensorrt-archived/tensorrt-601/tensorrt-support-matrix/index.html

As mentioned in topic 1071484, please try to serialize the ssd_mobilenetV2 model with an older TensorFlow version (one that still emits the plain FusedBatchNorm op) and try again.
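If re-exporting with an older TensorFlow is not convenient, a commonly reported workaround (not confirmed in this thread, so treat it as an assumption) is to downgrade the FusedBatchNormV3 nodes to FusedBatchNorm in the frozen graph before running the UFF converter. The sketch below illustrates the idea on a toy dict-based graph rather than a real tensorflow.GraphDef; with TensorFlow installed you would iterate over graph_def.node and edit node.op the same way:

```python
# Toy sketch of the FusedBatchNormV3 -> FusedBatchNorm downgrade that is
# usually applied to a real tensorflow.GraphDef before UFF conversion.
# Nodes are modeled as plain dicts here so the example is self-contained.

def downgrade_fused_batch_norm(nodes):
    """Rewrite every FusedBatchNormV3 node to FusedBatchNorm in place.

    On a real GraphDef you would also drop the V3-only 'U' attribute,
    which the older FusedBatchNorm op does not accept.
    """
    patched = 0
    for node in nodes:
        if node["op"] == "FusedBatchNormV3":
            node["op"] = "FusedBatchNorm"
            node.get("attr", {}).pop("U", None)  # V3-only dtype attribute
            patched += 1
    return patched

# Example graph fragment mimicking the failing layer from the log above.
graph = [
    {"op": "Conv2D", "name": "layer_19_2_Conv2d_4_3x3_s2_256"},
    {"op": "FusedBatchNormV3",
     "name": "layer_19_2_Conv2d_4_3x3_s2_256/BatchNorm/FusedBatchNormV3",
     "attr": {"U": "DT_FLOAT"}},
]

print(downgrade_fused_batch_norm(graph))  # number of nodes patched
print(graph[1]["op"])                     # now plain FusedBatchNorm
```

The same loop over a real frozen graph (followed by re-serializing the GraphDef and converting with convert-to-uff) is what several forum workarounds describe; whether the resulting graph is numerically identical should be verified against the original model.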

Thanks.