I’ve been going around in circles and would really appreciate some guidance. I want to deploy a YOLOv3 model on a TX2. I have a serialized .trt engine, but I am struggling to load it in C++ and run inference. I came across DeepStream, but it seems to be aimed at video pipelines — is there a way to use DeepStream to run inference on a single image from a .trt engine file?
Additionally, if there are other ways to run inference from a .trt file, I’d really appreciate it if you could point me to them.
• Hardware Platform (Jetson / GPU): Jetson TX2
• DeepStream Version: 4 (the one that ships with JetPack 4.3)
• JetPack Version (valid for Jetson only): 4.3
• TensorRT Version: 6.0.1
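For reference, here is a minimal sketch of what I understand engine loading to look like with the TensorRT 6 C++ API (the file name `yolov3.trt` is a placeholder, and I'm passing `nullptr` as the plugin factory even though I'm not sure whether my YOLO layers require custom plugins — corrections welcome):

```cpp
#include <fstream>
#include <iostream>
#include <vector>

#include <NvInfer.h>          // TensorRT core API
#include <cuda_runtime_api.h> // cudaMalloc

// Minimal logger required by the TensorRT runtime.
class Logger : public nvinfer1::ILogger {
    void log(Severity severity, const char* msg) override {
        if (severity <= Severity::kWARNING)
            std::cerr << msg << std::endl;
    }
} gLogger;

int main() {
    // Read the serialized engine into memory ("yolov3.trt" is a placeholder).
    std::ifstream file("yolov3.trt", std::ios::binary);
    if (!file) { std::cerr << "engine file not found\n"; return 1; }
    std::vector<char> blob((std::istreambuf_iterator<char>(file)),
                           std::istreambuf_iterator<char>());

    // Deserialize the engine. The TensorRT 6 signature takes a plugin
    // factory as the third argument; nullptr assumes no custom plugins.
    nvinfer1::IRuntime* runtime = nvinfer1::createInferRuntime(gLogger);
    nvinfer1::ICudaEngine* engine =
        runtime->deserializeCudaEngine(blob.data(), blob.size(), nullptr);
    nvinfer1::IExecutionContext* context = engine->createExecutionContext();

    // One device buffer per binding (input image + YOLO output tensors),
    // assuming FP32 bindings.
    std::vector<void*> bindings(engine->getNbBindings());
    for (int i = 0; i < engine->getNbBindings(); ++i) {
        nvinfer1::Dims d = engine->getBindingDimensions(i);
        size_t count = 1;
        for (int j = 0; j < d.nbDims; ++j) count *= d.d[j];
        cudaMalloc(&bindings[i], count * sizeof(float));
    }

    // ... cudaMemcpy the preprocessed image into the input binding here ...

    context->execute(1, bindings.data()); // synchronous inference, batch 1

    // ... cudaMemcpy output bindings back and run YOLO post-processing ...

    context->destroy();
    engine->destroy();
    runtime->destroy();
    return 0;
}
```

Is this roughly the right approach for a single image, or is there a recommended sample on the TX2 I should follow instead?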