I’d like to share our new work, CenterNet-HarDNet85, which achieves 42.5 COCO mAP and runs at near-real-time speed on the AGX Xavier (with JetPack 4.3, MAX-N mode). The repo is HERE.
We converted the PyTorch model to TensorRT through torch2trt with FP16 mode, and the current throughput is around 21 FPS at 512x512 input (network inference time only, i.e. roughly 47.6 ms per frame). Is there anything else we can do to further improve inference speed? Any advice is appreciated, and you are also very welcome to contribute to this repo.
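For reference, our "network inference time only" number is measured in the usual warm-up-then-average way. Below is a minimal, self-contained sketch of that methodology; `dummy_infer` is a stand-in for the actual TRT engine call (e.g. `model_trt(x)` from torch2trt), and on the GPU you would additionally synchronize (`torch.cuda.synchronize()`) before reading the clock so queued kernels are included:

```python
import time

def measure_fps(infer, n_warmup=10, n_runs=100):
    """Time an inference callable; return (avg latency in ms, FPS).

    `infer` stands in for the TRT engine call. On a real GPU model,
    synchronize the device before each perf_counter() read, otherwise
    you only time kernel launch, not execution.
    """
    for _ in range(n_warmup):          # warm-up: exclude one-time costs
        infer()
    start = time.perf_counter()
    for _ in range(n_runs):
        infer()
    elapsed = time.perf_counter() - start
    latency_ms = elapsed / n_runs * 1000.0
    return latency_ms, n_runs / elapsed

# Hypothetical stand-in doing ~2 ms of "work" per call.
def dummy_infer():
    time.sleep(0.002)

latency_ms, fps = measure_fps(dummy_infer)
print(f"{latency_ms:.1f} ms/frame, {fps:.1f} FPS")
```

At 21 FPS this works out to about 47.6 ms per frame, which is the budget any further optimization (e.g. INT8 instead of FP16, or offloading layers to the DLA) would need to cut into.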
We have also encountered some issues on JetPack 4.4 while converting the TRT model. If anyone knows how to solve them, please share with us. Thank you very much!
TensorRT Version: as bundled with JetPack 4.3
GPU Type: NVIDIA Jetson AGX Xavier
Nvidia Driver Version: as bundled with JetPack 4.3 (L4T)
CUDA Version: as bundled with JetPack 4.3
CUDNN Version: as bundled with JetPack 4.3
Operating System + Version: JetPack 4.3 (Ubuntu 18.04-based L4T)
Python Version (if applicable): 3.6
TensorFlow Version (if applicable):
PyTorch Version (if applicable): 1.5.0
Baremetal or Container (if container which image + tag):