Cannot find any whl file in the TensorRT 8.0.3.4 zip file for Windows

I cannot find any whl file for the following step of the TensorRT zip-file installation:

 5. Install one of the TensorRT Python wheel files from <installpath>/python:
 python.exe -m pip install tensorrt-*-cp3x-none-win_amd64.whl

Please let me know where I can download the whl file.

Environment

TensorRT Version: 8.0.3.4
GPU Type: RTX 3070
Nvidia Driver Version: 471.21
CUDA Version: 11.3
CUDNN Version: 8.2.1
Operating System + Version: Windows 10
Python Version (if applicable): 3.7
TensorFlow Version (if applicable): 2.5.0
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):

Relevant Files

TensorRT-8.0.3.4.Windows10.x86_64.cuda-11.3.cudnn8.2.zip

Hi @wrecker5,

Could you please give us more details? Have you tried running the above command along with the previous steps as mentioned in the installation guide? Are you facing any errors?

Thank you.

Yes, I did. I followed and executed all of the steps before step 5.

  1. Download the TensorRT zip file that matches the Windows version you are using.
  2. Choose where you want to install TensorRT. The zip file will install everything into a subdirectory called TensorRT-8.x.x.x. This new subdirectory will be referred to as <installpath> in the steps below.
  3. Unzip the TensorRT-8.x.x.x.Windows10.x86_64.cuda-x.x.cudnn8.x.zip file to the location that you chose. Where:
  • 8.x.x.x is your TensorRT version
  • cuda-x.x is CUDA version 10.2 or 11.3
  • cudnn8.x is cuDNN version 8.2
  4. Add the TensorRT library files to your system PATH. There are two ways to accomplish this task:
  a. Leave the DLL files where they were unzipped and add <installpath>/lib to your system PATH. You can add a new path to your system PATH using the steps below.
    1. Press the Windows key and search for “environment variables”, which should present you with the option Edit the system environment variables; click it.
    2. Click Environment Variables… at the bottom of the window.
    3. Under System variables, select Path and click Edit….
    4. Click either New or Browse to add a new item that contains <installpath>/lib.
    5. Continue to click OK until all the newly opened windows are closed.
    6. If your cuDNN libraries were not copied to the CUDA installation directory and instead left where they were unzipped, then repeat the above steps for the cuDNN bin directory.
  b. Copy the DLL files from <installpath>/lib to your CUDA installation directory, for example, C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\vX.Y\bin, where vX.Y is your CUDA version. The CUDA installer should have already added the CUDA path to your system PATH.

Actually, all of the steps before step 5 are just unzipping, setting the PATH, and copying some DLL files. I cannot find the python folder in the zip file, which should contain the whl file.
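For what it's worth, a minimal way to confirm the DLLs are actually reachable through PATH is a standalone check like the one below. This is a hypothetical helper, not part of the TensorRT samples; the DLL names are the ones shipped in <installpath>/lib.

// Hypothetical standalone check, not part of the TensorRT samples:
// verify that the DLLs unzipped into <installpath>/lib are resolvable
// through the system PATH.
#include <windows.h>
#include <cstdio>

int main()
{
    const char* dlls[] = {"nvinfer.dll", "nvinfer_plugin.dll"};
    for (const char* name : dlls) {
        // LoadLibraryA searches the directories listed on PATH.
        if (HMODULE h = LoadLibraryA(name)) {
            std::printf("OK: %s found on PATH\n", name);
            FreeLibrary(h);
        } else {
            std::printf("MISSING: %s (error %lu)\n", name, GetLastError());
        }
    }
    return 0;
}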

Hi @wrecker5,

Python is not yet supported on Windows for TRT 8.0. This will be supported in upcoming releases. Please skip the 5th step.

Thank you.


OK. Thank you.

After skipping step 5, I built and ran sampleMNIST as a sample to verify my installation. (I just followed the installation guide.)

However, I found some messages that should be fixed, since they cause very long latency when executing the model. (For the Mask R-CNN model, the latency increased dramatically.)

(1) “Detected invalid timing cache, setup a local cache instead”
(2) “Some tactics do not have sufficient workspace memory to run. Increasing workspace size may increase performance, please check verbose output.”

The second one can be removed by modifying sampleMNIST.cpp as below:

config->setMaxWorkspaceSize(16_MiB); // changed 16 to 1024
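For context, _MiB is a user-defined literal from the samples' common headers, so in standalone code the equivalent is a plain byte count. A minimal sketch, assuming the TensorRT 8.0 C++ API and the config pointer created in the sample:

#include "NvInfer.h"

// Sketch: the same change without the samples' user-defined literal.
// 'config' is assumed to be the nvinfer1::IBuilderConfig* obtained in
// the sample via builder->createBuilderConfig().
void raiseWorkspace(nvinfer1::IBuilderConfig* config)
{
    // 1024 MiB as a plain byte count; the 16_MiB literal used in
    // sampleMNIST comes from the samples' common headers.
    config->setMaxWorkspaceSize(1024ULL << 20);
}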

For the Mask R-CNN case, the first message causes a delay of around 10 minutes.

And for the Mask R-CNN case I cannot solve (2) by changing config->setMaxWorkspaceSize (I changed it to 6 GiB).

I also couldn't find a way to solve (1). Are there any solutions?
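One candidate I am looking at for (1) is the ITimingCache API that TensorRT 8.0 added to the C++ builder config, which should let the timed tactics be serialized to disk and reloaded, so the long timing pass is paid only once. A minimal sketch, assuming config is the sample's nvinfer1::IBuilderConfig* and timing.cache is a hypothetical file name:

#include "NvInfer.h"
#include <fstream>
#include <vector>

// Sketch: load a serialized timing cache (if present) into the builder
// config before building. "timing.cache" is a hypothetical file name.
nvinfer1::ITimingCache* loadTimingCache(nvinfer1::IBuilderConfig* config)
{
    std::vector<char> blob;
    std::ifstream in("timing.cache", std::ios::binary | std::ios::ate);
    if (in) {
        blob.resize(static_cast<size_t>(in.tellg()));
        in.seekg(0);
        in.read(blob.data(), static_cast<std::streamsize>(blob.size()));
    }
    // An empty blob yields a fresh local cache; otherwise the previous
    // timing results are reused and the long tactic timing is skipped.
    nvinfer1::ITimingCache* cache =
        config->createTimingCache(blob.data(), blob.size());
    config->setTimingCache(*cache, /*ignoreMismatch=*/false);
    return cache; // must stay alive until the engine is built
}

// Sketch: after buildEngineWithConfig() succeeds, write the now-filled
// cache back to disk for the next run.
void saveTimingCache(const nvinfer1::IBuilderConfig* config)
{
    nvinfer1::IHostMemory* mem = config->getTimingCache()->serialize();
    std::ofstream out("timing.cache", std::ios::binary);
    out.write(static_cast<const char*>(mem->data()),
              static_cast<std::streamsize>(mem->size()));
    delete mem;
}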

Hi,

These are not errors and shouldn't cause the problem. Please refer to Developer Guide :: NVIDIA Deep Learning TensorRT Documentation for more details.

Regarding the workspace issue, could you please share the ONNX model/script that reproduces the issue and provide verbose logs, so we can try it from our end for better debugging?
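If it helps for capturing those verbose logs with the C++ API, a minimal ILogger that keeps kVERBOSE messages might look like the sketch below. The samples normally route logging through sample::gLogger instead; this is just a standalone version.

#include "NvInfer.h"
#include <iostream>

// Sketch: a standalone logger that keeps kVERBOSE messages, so the
// tactic/workspace details behind message (2) are visible.
class VerboseLogger : public nvinfer1::ILogger
{
public:
    void log(Severity severity, const char* msg) noexcept override
    {
        // Print every message, including kVERBOSE output.
        std::cerr << msg << std::endl;
    }
};

// Usage (assumed): VerboseLogger logger;
//                  auto* builder = nvinfer1::createInferBuilder(logger);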

Thank you.