DriveWorks sample_dnn_tensor: How can I regenerate the engine file?

Description

I am trying to run the DriveWorks 3.5 sample application “sample_dnn_tensor” on my custom Xavier board. It fails with: "INVALID_CONFIG: Using an engine plan file across different models of devices is not recommended and is likely to affect performance or even cause errors."

I have looked at similar forum threads, and they all say I need to rebuild the engine file with the “tensorRT_optimization” tool. However, nobody says where the UFF, Caffe, or ONNX files needed to run “tensorRT_optimization” are. I assume there should be a corresponding model file for “sample_dnn_tensor” and the other DNN sample applications in DriveWorks 3.5.
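For context, rebuilding on the target device usually looks something like the sketch below. This is only an illustration: the tool path, flag names, and the model path are assumptions based on the DriveWorks 3.5 tooling layout, so verify them with the tool's --help output on your system.

```shell
# Run this on the target Xavier itself, so the engine is built
# for that exact GPU (engine plan files are device-specific).
# Flag names and paths are assumptions -- check --help first.
cd /usr/local/driveworks-3.5/tools/dnn
sudo ./tensorRT_optimization \
    --modelType=onnx \
    --onnxFile=/path/to/model.onnx \
    --out=tensorRT_model.bin
```

The key point is that the tool needs a source model (ONNX, UFF, or Caffe) as input, which is exactly what this question is asking about.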

Does anybody know?

Environment

TensorRT Version:
GPU Type:
Nvidia Driver Version:
CUDA Version:
CUDNN Version:
Operating System + Version:
Python Version (if applicable):
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):

Relevant Files

Please attach or include links to any models, data, files, or scripts necessary to reproduce your issue. (Github repo, Google Drive, Dropbox, etc.)

Steps To Reproduce

Please include:

  • Exact steps/commands to build your repro
  • Exact steps/commands to run your repro
  • Full traceback of errors encountered

Hi,
Please refer to the installation steps at the link below, in case you are missing anything:
https://docs.nvidia.com/deeplearning/tensorrt/install-guide/index.html
However, the suggested approach is to use the TensorRT NGC containers to avoid any system-dependency issues:
https://ngc.nvidia.com/catalog/containers/nvidia:tensorrt

To run the Python samples when using the NGC container, make sure the TensorRT Python packages are installed:
/opt/tensorrt/python/python_setup.sh
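Putting those two suggestions together, a typical container workflow looks roughly like this (the image tag is an example, not a recommendation for a specific version, so pick a current one from the NGC catalog):

```shell
# Pull a TensorRT container from NGC (tag shown is an example).
docker pull nvcr.io/nvidia/tensorrt:20.12-py3

# Start it with GPU access.
docker run --gpus all -it --rm nvcr.io/nvidia/tensorrt:20.12-py3

# Inside the container, install the TensorRT Python packages:
/opt/tensorrt/python/python_setup.sh
```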
Thanks!

Hello,

Thanks for the feedback. Will I get a suitable model file for the DriveWorks application “sample_dnn_tensor” if I follow the steps in those instructions?

Or should I design my own network, export it to ONNX, convert it to an engine plan file, and change the application code to fit my engine file?

What I am expecting is: changing no code in the sample application (because it is from NVIDIA) and regenerating the engine file (normally named “tensorRT_model.bin” and located in “/usr/local/driveworks-3.5/data/samples/detector/volta-integrated/”), because the application is proven and the NN is also proven.

Hi @youngsouck.eun,

The links above are for installing and setting up TensorRT. You need to design or obtain a model yourself, which can then be converted to a TensorRT engine file. With TensorRT, you can optimize inference for neural network models:
https://docs.nvidia.com/deeplearning/tensorrt/quick-start-guide/index.html
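Once you have a model, the simplest conversion path described in that guide is the trtexec command-line tool that ships with TensorRT. A minimal sketch, with the model path as a placeholder:

```shell
# Build a TensorRT engine from an ONNX model with trtexec.
# The model path is a placeholder; --fp16 is optional.
trtexec --onnx=/path/to/model.onnx \
        --saveEngine=model.engine \
        --fp16
```

Note that an engine built this way is specific to the GPU it was built on, which is why the original "across different models of devices" error appears when a prebuilt engine is copied to different hardware.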

If you need further assistance related to DriveWorks (how to get the model, etc.), you may get better help here.

Thank you.