I am trying to run the DriveWorks 3.5 sample application “sample_dnn_tensor” on my custom Xavier board. It fails with: "INVALID_CONFIG: Using an engine plan file across different models of devices is not recommended and is likely to affect performance or even cause errors."
I looked at similar forum threads, and they all say I need to rebuild the engine file with the “tensorRT_optimization” tool. However, nobody says where the UFF, Caffe, or ONNX files are that “tensorRT_optimization” would need. I assume there should be a corresponding model file for “sample_dnn_tensor” and the other DNN sample applications in DriveWorks 3.5.
Does anybody know?
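For reference, once a model file in one of those formats is available, the rebuild step described in those threads looks roughly like the sketch below. This is a sketch only: the flag names are recalled from the DriveWorks documentation and may differ on your version, so verify them with `tensorRT_optimization --help` on your system, and the model path is a placeholder.

```shell
# Sketch only: regenerate the engine plan ON the target device so the
# serialized plan matches the local GPU. Flag names are assumptions;
# confirm with ./tensorRT_optimization --help.
cd /usr/local/driveworks-3.5/tools/dnn

# /path/to/model.onnx is a placeholder for a model obtained separately.
./tensorRT_optimization \
    --modelType=onnx \
    --onnxFile=/path/to/model.onnx \
    --out=tensorRT_model.bin
```

The INVALID_CONFIG error above occurs because a TensorRT engine plan is serialized for one specific GPU; a plan built on or for a different device must be regenerated on the machine that will run inference.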
Thanks for the feedback. Will I get a suitable model file for the DriveWorks application “sample_dnn_tensor” if I follow the steps in those instructions?
Or should I design my own network, export it to ONNX, convert it to an engine plan file, and change the application code to fit my engine file?
What I am expecting is to change no code in the sample application (because it comes from NVIDIA) and only regenerate the engine file (normally named “tensorRT_model.bin” and located in “/usr/local/driveworks-3.5/data/samples/detector/volta-integrated/”), since both the application and the network are proven.
The links above are for installing and setting up TensorRT. You need to design or obtain a model, which can then be converted to a TensorRT engine file. With TensorRT, you can optimize inference for neural network models.
If you need further assistance related to DriveWorks (how to get the model, etc.), you may get better help here.
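To make the design-your-own-network option concrete, a rough end-to-end sketch follows. All file names are illustrative placeholders, and the `tensorRT_optimization` flags are assumptions to be checked against `--help` on your DriveWorks installation; the ONNX export itself would be done separately (e.g. with your training framework's ONNX exporter).

```shell
# Sketch under assumptions: convert a hypothetical ONNX export of your
# own network into a device-matched engine and install it where
# sample_dnn_tensor loads it from.

# 1. Build the engine on the Xavier itself so the serialized plan
#    matches the local GPU. Flag names are assumptions; check --help.
/usr/local/driveworks-3.5/tools/dnn/tensorRT_optimization \
    --modelType=onnx \
    --onnxFile=my_network.onnx \
    --out=tensorRT_model.bin

# 2. Back up the shipped plan, then replace it.
sudo cp /usr/local/driveworks-3.5/data/samples/detector/volta-integrated/tensorRT_model.bin \
        /usr/local/driveworks-3.5/data/samples/detector/volta-integrated/tensorRT_model.bin.bak
sudo cp tensorRT_model.bin \
        /usr/local/driveworks-3.5/data/samples/detector/volta-integrated/
```

Note that the sample's pre- and post-processing code is written for the original network's inputs and outputs, so substituting a network with a different architecture would still require application code changes, as the second question in the thread anticipates.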