Thank you. The issue is solved. In the model handler config of the inference_kitti_etlt spec file, replacing "tlt_config" with "tensorrt_config" and "model" with "trt_engine" fixed the problem. I must have edited it incorrectly at some point.
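For anyone hitting the same error, here is a minimal sketch of the change in the inferencer section of the spec file. Only the `tensorrt_config` / `trt_engine` field names come from the fix described above; the surrounding block name and the path are illustrative placeholders, so adapt them to your own spec.

```
inferencer_config {
  # ... other inferencer fields (batch size, image dimensions, etc.) stay as they were

  # Before (caused the failure): the handler pointed at the model via tlt_config
  # tlt_config {
  #   model: "/path/to/model"
  # }

  # After: point the handler at the serialized TensorRT engine instead
  tensorrt_config {
    trt_engine: "/path/to/trt.engine"
  }
}
```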