High inference time when using TensorRT on Jetson Nano

I converted a Keras model to a frozen graph (.pb) and used it for TF-TRT inference.
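For context, the freezing step looked roughly like this (a sketch; "model" is my loaded Keras model and the output file name is arbitrary):

import tensorflow as tf
from tensorflow.python.framework import graph_util

# Freeze the Keras model: convert variables to constants in the session graph
sess = tf.keras.backend.get_session()
graph_def = graph_util.convert_variables_to_constants(
    sess,
    sess.graph.as_graph_def(),
    [out.op.name for out in model.outputs])  # output node names
with tf.gfile.GFile("frozen_model.pb", "wb") as f:
    f.write(graph_def.SerializeToString())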

My TF-TRT conversion code:

import tensorflow.contrib.tensorrt as trt

# Optimize the frozen graph with TF-TRT
trt_graph = trt.create_inference_graph(
    input_graph_def=graph_def,         # the frozen graph loaded as a GraphDef
    outputs=outputs,                   # list of output node names
    max_batch_size=12,
    max_workspace_size_bytes=1 << 25,
    precision_mode="FP16")
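And this is roughly how I run the optimized graph (continuing from the snippet above; the tensor name "input_1:0" and the batch array are placeholders for my actual model):

import tensorflow as tf

with tf.Graph().as_default() as graph:
    tf.import_graph_def(trt_graph, name="")
    input_tensor = graph.get_tensor_by_name("input_1:0")  # placeholder name
    output_tensor = graph.get_tensor_by_name(outputs[0] + ":0")
    with tf.Session(graph=graph) as sess:
        # The first run builds the TensorRT engines and is much slower;
        # I time only the later, steady-state runs.
        preds = sess.run(output_tensor, feed_dict={input_tensor: batch})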

What files are needed to troubleshoot this, and how can I provide them?

Hi,

What is your use case?

We have TF-TRT tutorials for the object detection and image classification use cases.
Would you mind checking them first?
https://github.com/tensorflow/tensorrt/tree/master/tftrt/examples

Thanks.

Hi,

My use case is to detect faces and estimate age, gender, and emotion. It is not covered by those examples.

Hi,

Your case is a detector combined with several classifiers.
The DeepStream SDK provides a similar pipeline, and you can start from it with your own models:
https://developer.nvidia.com/deepstream-sdk
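For example, in a deepstream-app configuration the detector maps to a primary GIE and each classifier to a secondary GIE (a rough sketch; the config-file names are placeholders for your own models):

# Primary GIE: face detector
[primary-gie]
enable=1
gie-unique-id=1
config-file=config_infer_primary_face.txt

# Secondary GIE: age classifier, run on objects found by the primary detector
[secondary-gie0]
enable=1
gie-unique-id=2
operate-on-gie-id=1
config-file=config_infer_secondary_age.txt

# Secondary GIE: gender classifier
[secondary-gie1]
enable=1
gie-unique-id=3
operate-on-gie-id=1
config-file=config_infer_secondary_gender.txt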

Thanks.

Sathiez, were you able to get gender and age estimation working on the Jetson Nano?