[TLT] Can't run inference with the model on DeepStream

0:00:09.661153687 9586 0x55555f287180 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1715> [UID = 1]: Trying to create engine from model files
ERROR: …/nvdsinfer/nvdsinfer_func_utils.cpp:33 [TRT]: UffParser: Output error: Output conv2d_cov/Sigmoid not found
parseModel: Failed to parse UFF model

The error shown above appears while the engine is being built. What is the issue here?
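
For context, the output name in the error is the one nvinfer registers as a UFF output; in a Gst-nvinfer config file it comes from the output-blob-names property. A sketch of the relevant section is below. The uff-file, input blob name, and dimensions are placeholders; only the output name is taken from the error itself:

[property]
# Placeholder values except output-blob-names, which matches the error above
uff-file=model.uff
uff-input-blob-name=input_1
uff-input-dims=3;368;640;0
output-blob-names=conv2d_cov/Sigmoid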

Hi, could you please share your model and script so that we can help you better?

Alternatively, you can try running your model with the trtexec command.
https://github.com/NVIDIA/TensorRT/tree/master/samples/opensource/trtexec
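
For a plain (unencrypted) .uff file, the invocation might look roughly like this. The model file name, input blob name, input dimensions, and engine path are placeholders; only the output name is taken from the error above:

# Placeholder file names and dimensions; replace with your model's values
trtexec --uff=model.uff \
        --uffInput=input_1,3,368,640 \
        --output=conv2d_cov/Sigmoid \
        --saveEngine=model.engine

If trtexec reports the same "Output ... not found" error, the output name does not exist in the UFF graph and should be checked against the model's actual output layers. Note also that if the model was exported from TLT as an encrypted .etlt file, trtexec cannot parse it directly; the tlt-converter tool is needed to build an engine in that case.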

Thanks!

Hi @justinchiyeekwan,

This doesn’t look like a TensorRT issue. Could you please post your query on the related forum?

Thank you.