• How to reproduce the issue? (This is for errors. Please share the command line and the detailed log here.)
I am using TLT to train a new model with the YOLOv4 network on a ResNet-18 backbone.
For training, I followed the YOLOv4 sample step by step, and everything worked up to and including "yolo_v4 inference".
The problem occurred when I tried to deploy the .etlt model to the DeepStream app:
ERROR: [TRT]: UffParser: Unsupported number of graph 0
parseModel: Failed to parse UFF model
ERROR: failed to build network since parsing model errors.
ERROR: Failed to create network using custom network creation function
ERROR: Failed to get cuda engine from custom library API
0:00:02.104041869 31910 0x7f30002330 ERROR nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1736> [UID = 1]: build engine file failed
Bus error (core dumped)
I saw a note that says: "To integrate a model trained by TLT with DeepStream, you should generate a device-specific optimized TensorRT engine using tlt-converter. The generated TensorRT engine file can then be ingested by DeepStream (currently, YOLOv4 etlt files are not supported by DeepStream)." Is this the root cause?
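If I understand that note correctly, the conversion step would look something like the sketch below (based on the YOLOv4 example in the deepstream_tlt_apps README; the nvidia_tlt key and the 3x384x1248 input shape are the sample defaults, which I am only assuming apply to my model):

# Sketch: build a device-specific TensorRT engine from the .etlt on the target device.
# -k is the key the model was exported with; -p sets min/opt/max shapes for the "Input" tensor.
./tlt-converter -k nvidia_tlt \
    -p Input,1x3x384x1248,8x3x384x1248,16x3x384x1248 \
    -t fp16 \
    -e models/yolov4/yolov4_resnet18.etlt_b1_gpu0_fp16.engine \
    models/yolov4/yolov4_resnet18.etlt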
I tried that, but the same error occurred:
ERROR: Deserialize engine failed because file path: /home/nvidia/deepstream_tlt_apps/configs/yolov4_tlt/../../models/yolov4/yolov4_resnet18.etlt_b1_gpu0_fp16.engine open error
0:00:02.620954195 31193 0x559c656870 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1691> [UID = 1]: deserialize engine from file :/home/nvidia/deepstream_tlt_apps/configs/yolov4_tlt/../../models/yolov4/yolov4_resnet18.etlt_b1_gpu0_fp16.engine failed
0:00:02.621178678 31193 0x559c656870 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1798> [UID = 1]: deserialize backend context from engine from file :/home/nvidia/deepstream_tlt_apps/configs/yolov4_tlt/../../models/yolov4/yolov4_resnet18.etlt_b1_gpu0_fp16.engine failed, try rebuild
0:00:02.621216566 31193 0x559c656870 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1716> [UID = 1]: Trying to create engine from model files
ERROR: [TRT]: UffParser: Unsupported number of graph 0
parseModel: Failed to parse UFF model
ERROR: failed to build network since parsing model errors.
ERROR: Failed to create network using custom network creation function
ERROR: Failed to get cuda engine from custom library API
So it seems that I do have to use TensorRT and tlt-converter, since the note in the README says so. But I don't know how to run tlt-converter on the Jetson yet, even after reading the instructions.
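My current understanding of the Jetson setup, from the "Installing the tlt-converter" section of the TLT docs, is the sketch below (the archive name is a placeholder, since the actual download depends on the JetPack/TensorRT version):

# 1. Download the Jetson build of tlt-converter from the NVIDIA dev zone, then:
sudo apt-get install libssl-dev                      # OpenSSL is a stated prerequisite
export TRT_LIB_PATH=/usr/lib/aarch64-linux-gnu       # TensorRT libraries from JetPack
export TRT_INC_PATH=/usr/include/aarch64-linux-gnu   # TensorRT headers from JetPack
unzip tlt_converter_jetson.zip                       # placeholder archive name
chmod +x tlt-converter
./tlt-converter -h                                   # sanity check before converting

Is that the right procedure?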
I also tried commenting out the engine line in the config:
# model-engine-file=../../models/yolov4/yolov4_resnet18.etlt_b1_gpu0_fp16.engine
But the result is the same.
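For reference, the relevant lines of pgie_yolov4_tlt_config.txt now look roughly like this (the paths and the nvidia_tlt key are the sample defaults; I have not changed them):

[property]
tlt-encoded-model=../../models/yolov4/yolov4_resnet18.etlt
tlt-model-key=nvidia_tlt
# model-engine-file=../../models/yolov4/yolov4_resnet18.etlt_b1_gpu0_fp16.engine
infer-dims=3;384;1248

With that config, this is what I get: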
nvidia@nvidia-desktop:~/deepstream_tlt_apps/apps/tlt_detection$ ./ds-tlt-detection -c /home/nvidia/deepstream_tlt_apps/configs/yolov4_tlt/pgie_yolov4_tlt_config.txt -i /opt/nvidia/deepstream/deepstream-5.1/samples/streams/sample_720p.h264 -b 2
WARNING: Overriding infer-config batch-size (1) with number of sources (2)
Now playing: /home/nvidia/deepstream_tlt_apps/configs/yolov4_tlt/pgie_yolov4_tlt_config.txt
Opening in BLOCKING MODE
Opening in BLOCKING MODE
Opening in BLOCKING MODE
Opening in BLOCKING MODE
0:00:00.431532902 11426 0x5582230c70 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1716> [UID = 1]: Trying to create engine from model files
ERROR: [TRT]: UffParser: Unsupported number of graph 0
parseModel: Failed to parse UFF model
ERROR: failed to build network since parsing model errors.
ERROR: Failed to create network using custom network creation function
ERROR: Failed to get cuda engine from custom library API
0:00:02.513344708 11426 0x5582230c70 ERROR nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1736> [UID = 1]: build engine file failed
Segmentation fault (core dumped)
Running it again gives the same parser failure, this time ending in a bus error instead of a segmentation fault:
nvidia@nvidia-desktop:~/deepstream_tlt_apps/apps/tlt_detection$ ./ds-tlt-detection -c /home/nvidia/deepstream_tlt_apps/configs/yolov4_tlt/pgie_yolov4_tlt_config.txt -i /opt/nvidia/deepstream/deepstream-5.1/samples/streams/sample_720p.h264 -b 2
WARNING: Overriding infer-config batch-size (1) with number of sources (2)
Now playing: /home/nvidia/deepstream_tlt_apps/configs/yolov4_tlt/pgie_yolov4_tlt_config.txt
Opening in BLOCKING MODE
Opening in BLOCKING MODE
Opening in BLOCKING MODE
Opening in BLOCKING MODE
0:00:00.402544942 12097 0x55b6fb8c70 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1716> [UID = 1]: Trying to create engine from model files
ERROR: [TRT]: UffParser: Unsupported number of graph 0
parseModel: Failed to parse UFF model
ERROR: failed to build network since parsing model errors.
ERROR: Failed to create network using custom network creation function
ERROR: Failed to get cuda engine from custom library API
0:00:02.399339990 12097 0x55b6fb8c70 ERROR nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1736> [UID = 1]: build engine file failed
Bus error (core dumped)