I would like to know if there is any ready-to-run example/project with a two-output (two output branches) Caffe-trained classification model running on the Nano with TensorRT. I have successfully created the TRT engine for my model (with outputs "prob1" and "prob2"), but I cannot port it to the pre-compiled GoogLeNet example from the jetson-inference repository, since that example uses a pre-compiled "trtNet" that only supports a single classification output. Do I need to alter trtNet.cpp and compile the whole project from scratch? Any pointers on what to do as a quick proof of concept?
Thanks anyhow!
It's recommended to check a detection-related sample, since those usually have multiple outputs (e.g. for location and confidence).
Thanks! I will try that!