I want to run an action recognition model (TSN) on Jetson Xavier NX using TensorRT. I have already converted the model to ONNX format and then to an engine file. What is the next step to deploy and run inference? Is there some example for reference? Thanks!
You can do it using either the C++ or Python APIs.
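With the Python API, the usual flow is: deserialize the `.engine` file with a `trt.Runtime`, create an execution context, copy the input to the GPU, execute, and copy the result back. Below is a minimal sketch of that flow; the engine file name, input shape (8 segments of 224x224 RGB frames), and normalization constants are assumptions you should adapt to your own TSN export, and it relies on the `tensorrt` and `pycuda` packages that ship with JetPack:

```python
import numpy as np

# ImageNet mean/std commonly used by TSN (an assumption; match your training config)
MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float32)
STD = np.array([0.229, 0.224, 0.225], dtype=np.float32)

def preprocess(frames):
    """frames: (num_segments, H, W, 3) uint8 RGB -> (1, num_segments, 3, H, W) float32."""
    x = frames.astype(np.float32) / 255.0
    x = (x - MEAN) / STD
    x = x.transpose(0, 3, 1, 2)           # HWC -> CHW per segment
    return np.ascontiguousarray(x[None])  # add batch dimension

def load_engine(path):
    import tensorrt as trt  # imported lazily so this file also loads off-device
    logger = trt.Logger(trt.Logger.WARNING)
    with open(path, "rb") as f, trt.Runtime(logger) as runtime:
        return runtime.deserialize_cuda_engine(f.read())

def infer(engine, input_array):
    import tensorrt as trt
    import pycuda.autoinit  # noqa: F401  (creates the CUDA context)
    import pycuda.driver as cuda
    context = engine.create_execution_context()
    stream = cuda.Stream()
    bindings, host_out, dev_in, dev_out = [], None, None, None
    for i in range(engine.num_bindings):
        size = trt.volume(engine.get_binding_shape(i))
        dtype = trt.nptype(engine.get_binding_dtype(i))
        dev_mem = cuda.mem_alloc(size * np.dtype(dtype).itemsize)
        bindings.append(int(dev_mem))
        if engine.binding_is_input(i):
            dev_in = dev_mem
        else:
            dev_out, host_out = dev_mem, cuda.pagelocked_empty(size, dtype)
    cuda.memcpy_htod_async(dev_in, np.ascontiguousarray(input_array), stream)
    context.execute_async_v2(bindings=bindings, stream_handle=stream.handle)
    cuda.memcpy_dtoh_async(host_out, dev_out, stream)
    stream.synchronize()
    return host_out

if __name__ == "__main__":
    # engine = load_engine("tsn.engine")    # path is a placeholder
    # clip = preprocess(sampled_frames)     # e.g. 8 sampled 224x224 RGB frames
    # print(infer(engine, clip).argmax())   # index of the top-scoring action class
    pass
```

This mirrors the buffer-allocation pattern used in NVIDIA's Python samples; on the Jetson the same thing can also be done in C++ with `IRuntime::deserializeCudaEngine` and `enqueueV2`.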
It seems that there is really no related project/blog to read, and the official docs are not intuitive enough for a novice like me, but anyway I will spend time getting familiar with it. Thanks for your information!
Hi, I notice that in addition to the inference method introduced in the official documentation, we can also use the TensorRT Lite Engine for inference (related article), so do you have more information about its usage? I am curious about their differences ~
Hi @wade.wang ,
TensorRT Lite is a high-level C++ and Python library for TensorRT that allows you to create and validate TensorRT engines easily. TensorRT Lite can build TensorRT engines using simple flags, load compiled custom plugins from a file path, and calibrate INT8 inference using custom data streams.