Hi Morgan, I'm still having trouble running the app:
dewei@dewei-desktop:~/Documents/deepstream_tlt_apps$ ./deepstream-custom -c pgie_ssd_tlt_config.txt -i sample_720p.h264
Now playing: pgie_ssd_tlt_config.txt
Opening in BLOCKING MODE
Opening in BLOCKING MODE
0:00:00.215516495 12311 0x5578f942f0 INFO nvinfer gstnvinfer.cpp:602:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1591> [UID = 1]: Trying to create engine from model files
ERROR: [TRT]: UffParser: UFF buffer empty
parseModel: Failed to parse UFF model
ERROR: failed to build network since parsing model errors.
ERROR: Failed to create network using custom network creation function
ERROR: Failed to get cuda engine from custom library API
0:00:01.364630606 12311 0x5578f942f0 ERROR nvinfer gstnvinfer.cpp:596:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1611> [UID = 1]: build engine file failed
Segmentation fault (core dumped)
I don't know why the UFF buffer is empty. TensorRT was installed as part of JetPack on my Xavier, and I followed deepstream_tao_apps/README.md (https://github.com/NVIDIA-AI-IOT/deepstream_tao_apps) to set up the environment, as I described in another topic on the TLT forum.
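In case it helps: my understanding is that this error can appear when the .etlt file referenced by the config is missing or unreadable, or when the model key does not match the one used at export time, so I checked both. A minimal sketch of what I ran (tlt-encoded-model and tlt-model-key are the standard Gst-nvinfer TLT properties; the path in the second command is a hypothetical example based on the repo layout, not necessarily my exact value):

$ grep -E "tlt-encoded-model|tlt-model-key" pgie_ssd_tlt_config.txt
$ ls -l ./models/ssd/ssd_resnet18.etlt   # hypothetical path; the file should exist and be non-empty

Is there anything else that could cause the UFF buffer to come back empty?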