Is the .h5 file format supported in DeepStream 6.0?

Please provide complete information as applicable to your setup.

• Hardware Platform -GPU
• DeepStream Version - 6.0
• NVIDIA GPU Driver Version - 495.29.05
Is the .h5 file format supported in DeepStream 6.0?

Config
[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
#0=RGB, 1=BGR
model-color-format=0
#custom-network-config=yolov3.cfg
model-file=yolov3.h5
#model-file=yolov3.weights
#model-engine-file=yolov3-tiny_b1_gpu0_fp32.engine
labelfile-path=labels.txt

# 0=FP32, 1=INT8, 2=FP16 mode

network-mode=0
num-detected-classes=80
gie-unique-id=1
network-type=0
is-classifier=0

# 1=DBSCAN, 2=NMS, 3=DBSCAN+NMS Hybrid, 4=None (no clustering)

cluster-mode=2
maintain-aspect-ratio=1
parse-bbox-func-name=NvDsInferParseCustomYoloV3
custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
engine-create-func-name=NvDsInferYoloCudaEngineGet
#scaling-filter=0
#scaling-compute-hw=0

Error
0:00:00.227767141 1096282 0x2841720 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1914> [UID = 1]: Trying to create engine from model files
Yolo type is not defined from config file name:
ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:724 Failed to create network using custom network creation function
ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:789 Failed to get cuda engine from custom library API
0:00:00.495924744 1096282 0x2841720 ERROR nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1934> [UID = 1]: build engine file failed
0:00:00.495969280 1096282 0x2841720 ERROR nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2020> [UID = 1]: build backend context failed
0:00:00.495979690 1096282 0x2841720 ERROR nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1257> [UID = 1]: generate backend failed, check config file settings
0:00:00.496279037 1096282 0x2841720 WARN nvinfer gstnvinfer.cpp:841:gst_nvinfer_start: error: Failed to create NvDsInferContext instance
0:00:00.496289527 1096282 0x2841720 WARN nvinfer gstnvinfer.cpp:841:gst_nvinfer_start: error: Config file path: dstest1_pgie_config.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
Error: gst-resource-error-quark: Failed to create NvDsInferContext instance (1): gstnvinfer.cpp(841): gst_nvinfer_start (): /GstPipeline:pipeline0/GstNvInfer:primary-inference:
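
For context, the "Yolo type is not defined from config file name" line comes from the objectDetector_Yolo custom library, which derives the YOLO variant from the file name given in custom-network-config; with that key commented out and model-file pointing at an .h5 file, the custom engine-create function has nothing to parse, so the engine build fails. A minimal sketch of the darknet-style settings that sample expects, assuming the standard yolov3.cfg/yolov3.weights pair:

# Sketch only: darknet-style model references expected by objectDetector_Yolo
[property]
custom-network-config=yolov3.cfg
model-file=yolov3.weights
parse-bbox-func-name=NvDsInferParseCustomYoloV3
custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
engine-create-func-name=NvDsInferYoloCudaEngineGet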

No. The *.h5 file is a Keras model, which can be used by the TAO Toolkit for training; it is not supported by DeepStream. Please refer to TAO Toolkit | NVIDIA Developer.
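
For a model that nvinfer can actually build an engine from, the config would reference a TAO-exported .etlt model (with its export key), an ONNX file, or a pre-built TensorRT engine instead of an .h5 file. A minimal sketch, with placeholder file names and key:

# Sketch only: model formats nvinfer can consume in place of .h5 (placeholders)
[property]
tlt-encoded-model=yolo_resnet18.etlt
tlt-model-key=<tao-export-key>
# or an ONNX export
#onnx-file=model.onnx
# or a pre-built TensorRT engine
#model-engine-file=model_b1_gpu0_fp32.engine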

Thanks for the reply.
