Inference Issues in DeepStream with a Custom-Trained Mobilenet_v2 Model from the TAO Toolkit

• Hardware - A2000
• Network Type (Classification) - Mobilenet_v2
• Docker container - nvcr.io/nvidia/tao/tao-toolkit:5.3.0-deploy
• How to reproduce the issue?

I have trained an image classification model with the NVIDIA TAO Toolkit; the network type is Mobilenet_v2 on the TensorFlow 2 backend. I was able to get the TLT files and convert them to ONNX using the following commands:

tao model classification_tf2 train -e /workspace/tao-experiments/specs/spec.yaml --gpus 1

tao model classification_tf2 export -e /workspace/tao-experiments/specs/spec.yaml --gpus 1

I also generated the TRT engine file using the command:

tao deploy classification_tf2 gen_trt_engine -e /workspace/tao-experiments/specs/spec.yaml

spec.txt (1.6 KB)
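For reference, the export and engine-generation sections of my spec look roughly like this (an abridged sketch; the field names follow the TAO 5.x classification_tf2 spec and may differ slightly between versions, and the paths are examples):

export:
  checkpoint: '/workspace/tao-experiments/results/train/mobilenet_v2.tlt'
  onnx_file: '/workspace/tao-experiments/results/export/mobilenet_v2.onnx'
gen_trt_engine:
  onnx_file: '/workspace/tao-experiments/results/export/mobilenet_v2.onnx'
  trt_engine: '/workspace/tao-experiments/results/mobilenet_v2_fp16.engine'
  tensorrt:
    data_type: fp16
    max_batch_size: 16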

When I try to perform inference in DeepStream using the generated engine file, the classifier meta on the detected objects comes back as None. I have attached the SGIE config file and the DeepStream script I used. Can anyone help resolve this issue?

dstest2_sgie1_config.txt (2.2 KB)

deepstream_test_2.txt (10.7 KB)
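For context, my probe reads the classifier output roughly as below (a simplified sketch of the attached script; the casts and field names are the standard pyds bindings used in the DeepStream Python samples):

import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst
import pyds

def sgie_sink_pad_buffer_probe(pad, info, u_data):
    # Walk batch -> frame -> object metadata and read classifier results.
    gst_buffer = info.get_buffer()
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        l_obj = frame_meta.obj_meta_list
        while l_obj is not None:
            obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
            # In my run this list is None for every object, so the
            # loop below never executes.
            l_class = obj_meta.classifier_meta_list
            while l_class is not None:
                class_meta = pyds.NvDsClassifierMeta.cast(l_class.data)
                l_label = class_meta.label_info_list
                while l_label is not None:
                    label_info = pyds.NvDsLabelInfo.cast(l_label.data)
                    print(label_info.result_label, label_info.result_prob)
                    l_label = l_label.next
                l_class = l_class.next
            l_obj = l_obj.next
        l_frame = l_frame.next
    return Gst.PadProbeReturn.OK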

Do you get the correct result when you run tao deploy classification_tf2 inference xxx?

I am attaching the log from when I ran tao deploy classification_tf2 gen_trt_engine -e /workspace/tao-experiments/specs/spec.yaml:

myoutput.txt (2.7 MB)

I mean, could you please run tao deploy classification_tf2 inference xxx to check whether the inference result is correct?

When I try to run tao deploy classification_tf2 inference, I get the following error.

Can you share the command and full log?

command: tao deploy classification_tf2 inference -e /workspace/tao-experiments/specs/spec.yaml

log file: inference_log.txt (5.2 KB)

spec file: spec.txt (1.6 KB)

File "<frozen inferencer.trt_inferencer>", line 29, in load_engine
TypeError: expected str, bytes or os.PathLike object, not NoneType
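The TypeError indicates that the engine path resolved to None before the file was opened. Assuming load_engine opens the serialized engine with a plain open call, passing None reproduces the exact message:

# Minimal reproduction, assuming load_engine does something like
# open(engine_path, "rb") internally.
engine_path = None  # what the spec parsing apparently produced
open(engine_path, "rb")  # TypeError: expected str, bytes or os.PathLike object, not NoneType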

Please check the engine path.

In your spec file, it is:

inference:
  checkpoint: /workspace/tao-experiments/results/mobilenet_v2_fp16.engine

Please change it to:

inference:
  checkpoint: '/workspace/tao-experiments/results/mobilenet_v2_fp16.engine'

I have changed the inference checkpoint path.

spec file: spec.txt (1.6 KB)

I am still getting the same error during inference.
inference_log.txt (5.4 KB)

Please check if the engine file is available:
tao deploy classification_tf2 run ls /workspace/tao-experiments/results/mobilenet_v2_fp16.engine

I have the engine file available at that exact name and path.

Please check that the engine file is not zero bytes.
Also, you can debug inside the docker directly:
$ tao deploy classification_tf2 run /bin/bash
Then add debug code in /usr/local/lib/python3.10/dist-packages/nvidia_tao_deploy/cv/classification_tf1/inferencer.py (i.e., tao_deploy/nvidia_tao_deploy/cv/classification_tf1/inferencer.py at main · NVIDIA/tao_deploy · GitHub).

Then,
$ classification_tf2 inference xxx
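For example, a debug print before the engine is opened would confirm what path the inferencer actually receives (an illustrative sketch only; the real function in your installed version may have a different signature):

import os

def load_engine(trt_runtime, engine_path):
    # Debug: show the resolved path and whether a file exists there
    # before attempting to open it.
    print(f"[debug] engine_path={engine_path!r} "
          f"exists={engine_path is not None and os.path.exists(engine_path)}")
    with open(engine_path, "rb") as f:
        engine_data = f.read()
    return trt_runtime.deserialize_cuda_engine(engine_data)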

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.