Detectnet_v2 - Message type "Inference" has no field named "evaluation_config"

Hi, a question - I’m trying to run inference using tao-deploy for the detectnet_v2 model using an engine file. This gives the following error:


Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/nvidia_tao_deploy/cv/detectnet_v2/scripts/inference.py", line 3, in <module>

google.protobuf.text_format.ParseError: 1:1 : Message type "Inference" has no field named "evaluation_config".

What would be really helpful is if the “Inference” message definition were printed along with the error. As it is, I have to guess whether the documentation I am following is out of date, whether there is a bug, or whether I am still doing something wrong after an hour of trying. I tried to find the source code to see what is going on, but the pip package nvidia-tao-deploy is obfuscated! Why? This makes it really hard to tackle the issue without your help :-/
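For what it is worth, protobuf messages carry their own schema at runtime, so you can list the fields a message accepts from its descriptor, assuming you can import the compiled `*_pb2` module at all. The sketch below uses the well-known `Struct` message as a stand-in for TAO’s `Inference` proto (which I cannot import here); the same `DESCRIPTOR.fields` trick applies to any generated message class:

```python
# Sketch: discover which fields a protobuf message type accepts, and
# reproduce the same class of ParseError seen in the traceback above.
# Struct is only a stand-in for the TAO "Inference" message.
from google.protobuf import text_format
from google.protobuf.struct_pb2 import Struct


def valid_fields(msg):
    """Return the field names this protobuf message type accepts."""
    return [f.name for f in msg.DESCRIPTOR.fields]


msg = Struct()
print(valid_fields(msg))  # → ['fields']

# Parsing a text-format spec with an unknown field raises the same
# text_format.ParseError reported by tao-deploy.
try:
    text_format.Parse("evaluation_config { }", msg)
except text_format.ParseError as err:
    print(err)
```

Running the equivalent against the real `Inference` message class would print exactly the field names a valid inference spec may contain.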

I can provide you with the full details like the spec file and everything, but I would rather be able to solve this issue myself.

Q1 - Can you provide the nvidia-tao-deploy without obfuscation?
If not, can you provide the “Inference” message proto?
If not, what do you suggest?

I am afraid something is mismatched in your inference spec file.
Please refer to the detectnet_v2 notebook (the .ipynb file) and the spec files that ship with it.

You can download via

wget --content-disposition https://api.ngc.nvidia.com/v2/resources/nvidia/tao/tao-getting-started/versions/4.0.2/zip -O getting_started_v4.0.2.zip
unzip -u getting_started_v4.0.2.zip -d ./getting_started_v4.0.2 && rm -rf getting_started_v4.0.2.zip && cd ./getting_started_v4.0.2
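For orientation, the inference spec in that notebook is a protobuf text file. A minimal sketch is shown below; the section names `inferencer_config` and `bbox_handler_config` follow the TAO detectnet_v2 samples, but exact fields vary by version and every value here is a placeholder, so verify against the spec file shipped in your download:

```
# Hypothetical minimal inference spec for detectnet_v2 with tao-deploy.
# All values are placeholders; check the notebook's spec file for the
# fields your TAO version actually accepts.
inferencer_config {
  target_classes: "car"
  target_classes: "person"
  batch_size: 4
  image_width: 960
  image_height: 544
  image_channels: 3
  tensorrt_config {
    trt_engine: "/path/to/model.engine"  # placeholder path
  }
}
bbox_handler_config {
  kitti_dump: true
  disable_overlay: false
}
```

Note that the top-level sections here differ from the evaluation spec, which is exactly the kind of mismatch that produces the “no field named …” error above.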

Thank you for the fast reply.

I am basing my config on:
https://docs.nvidia.com/tao/tao-toolkit/text/tao_deploy/detectnet_v2.html#running-inference-through-tensorrt-engine

The example you point to does not use tao-deploy with an .engine file; it uses the tao command with a .tlt file.

However:

  1. Using the documentation, I was able to create a new spec file and run inference.
  2. With the new spec file, the tao-deploy command also works.

Conclusion:
The documentation I was following is incorrect; the correct syntax is in the provided examples.

Glad to know it is working now. The link you mentioned above is for evaluation, not for running inference.

Yes, please refer to the inference spec file in the notebook.
