Most accurate model for the Jetson Xavier

Hi everyone,

After working with the TensorRT-optimized version of SSD and getting up to 160 FPS on my Xavier, I was wondering whether there is a model that is slower but more accurate. Let's take an example: if I want my script to detect objects like boats or people, and I want the algorithm to be very accurate and to run at 25-30 FPS (the speed of any commercial webcam, for example), what should I do to achieve that? I was thinking about training my own model, but with which framework? I have already used Darknet and TensorFlow for training, but for the moment, even with the TensorRT-optimized YOLO version, my Xavier cannot do more than 22 FPS with YOLOv3-TRT. So should I train with TensorFlow, or do you know of any existing algorithm that could fill my needs (~30 FPS with good mAP)?

Thanks in advance for taking time reading this, have a nice day :)

Hi,

You can find several object detection models with their mAP values in the TensorFlow GitHub:
https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md#coco-trained-models

They also have performance scores, so you can compare them to your model to estimate the approximate FPS on Xavier.

For existing samples, we have SSD, Faster R-CNN, and YOLOv3 models.
The performance on Xavier is 257.6 FPS, 26.54 FPS, and 22.4 FPS respectively in FP16 mode.
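
In case it helps, here is a minimal sketch of how FP16 mode is typically enabled when building an engine with the TensorRT Python API. The ONNX export path and the file name are just assumptions, and the exact builder API differs slightly between TensorRT versions:

```python
# Minimal sketch: build an FP16 TensorRT engine from an ONNX model.
# "model.onnx" is a placeholder for your own exported detector.
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def build_fp16_engine(onnx_path="model.onnx"):
    builder = trt.Builder(TRT_LOGGER)
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, TRT_LOGGER)

    # Parse the model and print any unsupported-layer errors.
    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            for i in range(parser.num_errors):
                print(parser.get_error(i))
            return None

    config = builder.create_builder_config()
    config.max_workspace_size = 1 << 30      # 1 GiB of scratch memory
    if builder.platform_has_fast_fp16:       # true on Xavier
        config.set_flag(trt.BuilderFlag.FP16)

    # Newer TensorRT versions return a serialized plan via
    # build_serialized_network instead of build_engine.
    return builder.build_engine(network, config)
```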

Thanks.

Thanks for your response :) I will stay with SSD Inception for the moment, I guess :) Does NVIDIA provide any sort of tutorial for building my own FP16 engine with my own dataset, based on SSD?

Have a good day

Hi,

We have a Transfer Learning Toolkit (TLT) that may meet your requirements:

For the SSD detector, we support ResNet10/18 for feature extraction.
You can find the pretrained models in our NGC cloud:

Thanks.

Hi,

I saw TLT this summer and worked a bit with it, but if I remember well, is it not possible to integrate the TLT pipeline into C++ or Python code and use its detections in a further application? Correct me if I'm wrong; it would be very interesting to work with it on the Xavier platform.

Have a nice day :)

Hi,

A general workflow is

1. Re-train the model with transfer learning toolkit.

2. Execute the model with the DeepStream SDK (C++ & Python).
Here is a sample for your reference:
https://github.com/NVIDIA-AI-IOT/deepstream_4.x_apps#deepstream-configuration-file
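
If you would rather call the trained engine directly from your own Python code instead of going through the full DeepStream pipeline, a rough sketch of deserializing and running a serialized engine could look like the following. Note the assumptions: pycuda is an extra dependency, "ssd.engine" and the input layout are placeholders, binding 0 is assumed to be the input, and the binding API is deprecated in newer TensorRT releases:

```python
# Minimal sketch: run inference on a serialized TensorRT engine from Python.
import numpy as np
import pycuda.autoinit          # creates a CUDA context on import
import pycuda.driver as cuda
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

with open("ssd.engine", "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())

context = engine.create_execution_context()

# Allocate one host/device buffer pair per binding (inputs and outputs).
host_bufs, dev_bufs, bindings = [], [], []
for binding in engine:
    size = trt.volume(engine.get_binding_shape(binding))
    dtype = trt.nptype(engine.get_binding_dtype(binding))
    host = cuda.pagelocked_empty(size, dtype)
    dev = cuda.mem_alloc(host.nbytes)
    host_bufs.append(host)
    dev_bufs.append(dev)
    bindings.append(int(dev))

# Copy a (placeholder) preprocessed frame in, run, and copy results out.
# Binding 0 is assumed to be the input; the rest are outputs.
host_bufs[0][:] = np.random.rand(host_bufs[0].size).astype(host_bufs[0].dtype)
stream = cuda.Stream()
cuda.memcpy_htod_async(dev_bufs[0], host_bufs[0], stream)
context.execute_async_v2(bindings, stream.handle)
for host, dev in zip(host_bufs[1:], dev_bufs[1:]):
    cuda.memcpy_dtoh_async(host, dev, stream)
stream.synchronize()
```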

Thanks.

Hi, I was wondering why SSD is so much faster than YOLO with TRT. Can you help me find an answer? It is quite hard to find an explanation on the internet.

Thanks for your quick and helpful answers, have a nice day :)

I found that TensorRT does not support some of the layers and operations used in the YOLO algorithm, so I guess that is linked.
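
For anyone else looking into this, layers TensorRT cannot express natively (such as the YOLO detection layer) have to come from plugins. Here is a small sketch of how one might list the plugin creators a TensorRT installation registers, assuming only the Python bindings; plugin names and availability vary by version:

```python
# Minimal sketch: list the plugins registered with TensorRT, e.g. to check
# whether a creator for a custom layer (such as a YOLO layer) is available.
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

# Register the plugins shipped with libnvinfer_plugin (NMS, PriorBox, ...).
trt.init_libnvinfer_plugins(TRT_LOGGER, "")

registry = trt.get_plugin_registry()
for creator in registry.plugin_creator_list:
    print(creator.name, creator.plugin_version)
```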