Hi, I was running the sample app from this git repo. The sample config with the provided pretrained model runs successfully on Xavier. However, when I moved my custom TLT 2.0 trained model into the sample app and ran the same command
dewei@dewei-desktop:~/deepstream_tlt_apps$ ./deepstream-custom -c pgie_yolo_tlt_config.txt -i /home/dewei/deepstream_tlt_apps/outfile.h264 -b 1 -d
Now playing: pgie_yolo_tlt_config.txt
using the same config format as the sample, it fails with the following error:
Using winsys: x11
Opening in BLOCKING MODE
0:00:00.211150899 24086 0x55876bd4c0 INFO nvinfer gstnvinfer.cpp:602:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1591> [UID = 1]: Trying to create engine from model files
ERROR: [TRT]: UffParser: Could not read buffer.
parseModel: Failed to parse UFF model
ERROR: failed to build network since parsing model errors.
ERROR: Failed to create network using custom network creation function
ERROR: Failed to get cuda engine from custom library API
0:00:01.526612825 24086 0x55876bd4c0 ERROR nvinfer gstnvinfer.cpp:596:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1611> [UID = 1]: build engine file failed
Bus error (core dumped)
I have attached my config file pgie_yolo_tlt_config.txt (2.6 KB) and my labels file yolo_labels.txt (136 Bytes) below. Thanks.
I cannot upload my custom TLT-trained model because the forum system does not allow it.
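For reference, the model-related part of my config follows the usual deepstream_tlt_apps pgie layout, roughly like this (the paths and key below are placeholders, not my actual values):

```
[property]
gpu-id=0
# .etlt model exported from TLT and the key used at export time
tlt-encoded-model=./models/yolo/yolov3_resnet18.etlt
tlt-model-key=<my-export-key>
labelfile-path=./yolo_labels.txt
# UFF input settings matching the exported model
uff-input-dims=3;384;1248;0
uff-input-blob-name=Input
network-type=0
num-detected-classes=4
```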