- Hardware Platform: Jetson Nano
- DeepStream Version:
- JetPack Version:
- TensorRT Version:
- TLT Version:
I am using TLT to train a detection model (DetectNetV2 with a ResNet18 backbone) on a custom object class, which is then deployed in a DeepStream app based on the Python examples.
When I train the TLT model and deploy it on the Nano in the DeepStream app, I see a considerable drop in detection accuracy. Below is what I observe and the options I have explored:
- TLT model converted to a TRT engine file, then run on the Nano in the DS app:
  - The TLT model shows a mAP of 90% after training, but the accuracy drops when the converted TRT engine file is deployed on the Nano. Samples that the TLT model inferred correctly are no longer detected by the TRT engine file.
  - What could be causing this? Do I have to change any specific configuration in the DeepStream app / pgie file to ensure the accuracy carries over from TLT to TRT?
  - I am currently using a pre-cluster-threshold of 0.2 in my pgie configuration file; the conversion command and config are sketched after this list.
- TLT model run as an ETLT model on the Nano:
  - When run directly as an .etlt model (letting DeepStream build the engine at startup), the model made no detections at all with the same threshold configuration as above. Is there a specific way to set this up? A sketch of this config variant is included below as well.
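For context, the conversion to the engine file was done with tlt-converter along these lines (a sketch: the key, file names, and input dimensions are placeholders, and -d must match the training resolution):

```
# Build a TRT engine from the encoded TLT model (placeholder key/paths/dims)
tlt-converter -k <NGC_MODEL_KEY> \
              -d 3,384,1248 \
              -o output_cov/Sigmoid,output_bbox/BiasAdd \
              -t fp16 \
              -e resnet18_detector.trt \
              resnet18_detector.etlt
```

Could the FP32-to-FP16 precision change during conversion alone account for a drop of this size?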
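The relevant parts of my pgie config for the engine-file case look roughly like this (a trimmed sketch; paths are placeholders, and the preprocessing values are the DetectNet_v2 defaults from the DeepStream/TLT sample configs, which as I understand must match the preprocessing used in training):

```
[property]
gpu-id=0
# DetectNet_v2 expects RGB input scaled by 1/255 with no mean offsets
net-scale-factor=0.00392156862745098
model-color-format=0
maintain-aspect-ratio=0
labelfile-path=<labels.txt>
model-engine-file=<resnet18_detector.trt>
uff-input-blob-name=input_1
batch-size=1
# 0=FP32, 1=INT8, 2=FP16
network-mode=2
num-detected-classes=1
gie-unique-id=1
output-blob-names=output_cov/Sigmoid;output_bbox/BiasAdd

[class-attrs-all]
pre-cluster-threshold=0.2
eps=0.2
group-threshold=1
```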
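For the ETLT case, the [property] section instead points at the encoded model and key so DeepStream builds the engine itself (again a sketch with placeholder key/paths; the input dims line must match the training resolution):

```
[property]
net-scale-factor=0.00392156862745098
model-color-format=0
labelfile-path=<labels.txt>
tlt-encoded-model=<resnet18_detector.etlt>
tlt-model-key=<NGC_MODEL_KEY>
# CHW dims plus input-order flag (0 = NCHW)
uff-input-dims=3;384;1248;0
uff-input-blob-name=input_1
batch-size=1
network-mode=2
num-detected-classes=1
gie-unique-id=1
output-blob-names=output_cov/Sigmoid;output_bbox/BiasAdd

[class-attrs-all]
pre-cluster-threshold=0.2
```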
If the problem is not in either of these options, is there something else I should try to ensure accuracy is maintained when running a TLT model in a DS app?