I am trying to run the DriveWorks 3.5 sample application “sample_dnn_tensor” on my custom board with Xavier. It fails with: "INVALID_CONFIG: Using an engine plan file across different models of devices is not recommended and is likely to affect performance or even cause errors."
I have looked at similar forum threads, and they all say I need to rebuild the engine file with the “tensorRT_optimization” tool. However, none of them says where the UFF, Caffe, or ONNX model files are to feed into “tensorRT_optimization”. I assume there should be a corresponding model file for “sample_dnn_tensor” and the other DNN sample applications in DriveWorks 3.5.
Does anybody know?
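For context, the rebuild step the other threads describe usually looks roughly like the sketch below, run on the target device itself. The tool path, flag names, and file names here are assumptions based on the typical DriveWorks layout, so verify them with `--help` on your installation:

```shell
# Hypothetical sketch: regenerate the engine plan on the Xavier itself,
# since TensorRT engine files are not portable across GPU architectures.
# Paths, flags, and model file names are assumptions -- check --help.
/usr/local/driveworks-3.5/tools/dnn/tensorRT_optimization \
    --modelType=onnx \
    --onnxFile=model.onnx \
    --out=tensorRT_model.bin
```

The point behind the INVALID_CONFIG message is that an engine serialized on one GPU (e.g. a desktop card) cannot safely be deserialized on a different one (the Xavier iGPU), so the plan must be rebuilt on the device that will run it.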
Nvidia Driver Version:
Operating System + Version:
Python Version (if applicable):
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):
Please attach or include links to any models, data, files, or scripts necessary to reproduce your issue. (Github repo, Google Drive, Dropbox, etc.)
Steps To Reproduce
- Exact steps/commands to build your repro
- Exact steps/commands to run your repro
- Full traceback of errors encountered
Please refer to the installation steps in the link below in case you are missing anything.
However, the suggested approach is to use the TRT NGC containers to avoid any system-dependency issues.
To run the Python samples, make sure the TRT Python packages are installed when using the NGC container.
Thanks for the feedback. Will I get a suitable model file for the DriveWorks application “sample_dnn_tensor” if I follow the steps in those instructions?
Or should I design my own network, export it to ONNX, convert that to an engine plan file, and change the application code to fit my engine file?
What I am expecting is: changing no code in the sample application (because it is from NVIDIA) and only regenerating the engine file (normally named “tensorRT_model.bin” and located in “/usr/local/driveworks-3.5/data/samples/detector/volta-integrated/”), because the application is proven and the NN is also proven.
The links above are for installing/setting up TensorRT. You need to design or obtain a model, and we can then convert it to a TensorRT engine file. With TensorRT, you can optimize inference for neural-network models.
If you need further assistance related to DriveWorks (how to get the model, etc.), you may get better help here.
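As a sketch of that conversion step, TensorRT ships a `trtexec` command-line tool that can build a serialized engine directly from an ONNX model. The file names below are placeholders for your own model:

```shell
# Build a serialized TensorRT engine from an ONNX model, on the target device.
# model.onnx and model.plan are placeholder names.
trtexec --onnx=model.onnx \
        --saveEngine=model.plan \
        --fp16
# --fp16 is optional and enables half precision on hardware that supports it.
```

The resulting plan file is specific to the GPU it was built on, which is why it should be generated on the device that will run inference.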