Hello!
I’m trying to run PeopleSegNet on a Jetson Nano, following the steps from the NVIDIA-AI-IOT/deepstream_tlt_apps GitHub repository:
https://github.com/NVIDIA-AI-IOT/deepstream_tlt_apps
I have installed DeepStream 5.1, JetPack 4.5.1, and TensorRT 7.1.3.0.
When I run the model, the following output appears in the console:
mero@Jetson-HC02:~/deepstream_tlt_apps$ ./apps/ds-tlt -c configs/peopleSegNet_tlt/pgie_peopleSegNet_tlt_config.txt -i /opt/nvidia/deepstream/deepstream-5.1/samples/streams/sample_720p.h264
Now playing: configs/peopleSegNet_tlt/pgie_peopleSegNet_tlt_config.txt
Opening in BLOCKING MODE
Opening in BLOCKING MODE
Opening in BLOCKING MODE
Opening in BLOCKING MODE
0:00:01.679826913 27052 0x55a5b60440 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1716> [UID = 1]: Trying to create engine from model files
ERROR: [TRT]: UffParser: Validator error: generate_detections: Unsupported operation _GenerateDetection_TRT
parseModel: Failed to parse UFF model
ERROR: failed to build network since parsing model errors.
ERROR: Failed to create network using custom network creation function
ERROR: Failed to get cuda engine from custom library API
0:00:09.284932172 27052 0x55a5b60440 ERROR nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1736> [UID = 1]: build engine file failed
Segmentation fault (core dumped)
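If it helps narrow things down: as I understand from the repo README, the unsupported op in the error (GenerateDetection_TRT) is supposed to come from the TensorRT OSS plugin build rather than the stock JetPack libnvinfer_plugin. This is a quick check I tried to see whether my installed plugin library contains that op at all (the library path below is an assumption based on the default JetPack 4.5.1 location; adjust it if yours differs):

```shell
# Check whether the GenerateDetection_TRT plugin symbol is present in the
# installed TensorRT plugin library. A count of 0 (or a missing file) would
# suggest the TensorRT OSS plugin build from the README was not installed.
PLUGIN=/usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7.1.3
if [ -f "$PLUGIN" ]; then
    # grep -c prints how many matching strings the library contains
    strings "$PLUGIN" | grep -c GenerateDetection_TRT
else
    echo "plugin library not found at $PLUGIN"
fi
```

On my Nano this is how I would expect to tell the stock library apart from the rebuilt one, but I may be misreading the README, so please correct me if this check is not meaningful.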
I hope we can find a solution.
Thank you!