Compatibility of TAO and DeepStream Environments for Engine Building

  • Hardware Platform: GPU
  • DeepStream Version: 6.4
  • TensorRT Version: 8.6 (DeepStream target), 8.5 (TAO training environment)
  • NVIDIA GPU Driver Version: 535 (DeepStream target), 560 (TAO training environment)
  • Issue Type: Questions

I want to use the same PC for training TAO models and building engines for DeepStream, but the TAO version I use for training may use a different GPU driver version, CUDA version, and TensorRT version than my target DeepStream version. For example, DeepStream 6.4 uses GPU driver 535, CUDA 12.2, and TensorRT 8.6, while TAO 5.5.0 uses GPU driver 560 and CUDA 12.6, and the associated nvidia-tao-deploy wheel uses TensorRT 8.5. When I build an engine for DeepStream, do the GPU driver version, CUDA version, and TensorRT version of the build environment need to match those of the target DeepStream version?

  1. Are the TAO training and the DeepStream test running in different Docker containers?
  2. Yes, please use the same TensorRT version to generate the engine and to run it. DeepStream also supports generating the engine itself: you can set the ONNX model path in the nvinfer configuration file; please refer to this sample and to the sketch below.
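
For reference, here is a minimal sketch of the relevant keys in the nvinfer [property] section. The file paths and class count are placeholders, and the exact set of keys depends on your model and DeepStream version:

```
[property]
gpu-id=0
# Placeholder paths - replace with your TAO-exported ONNX model and label file
onnx-file=/opt/models/tao_model.onnx
labelfile-path=/opt/models/labels.txt
# If this engine file does not exist, nvinfer builds it from the ONNX with the
# TensorRT version installed on this machine and caches it at this path
model-engine-file=/opt/models/tao_model.onnx_b1_gpu0_fp16.engine
batch-size=1
# network-mode: 0=FP32, 1=INT8, 2=FP16
network-mode=2
num-detected-classes=4
gie-unique-id=1
```

Because nvinfer builds the engine with whatever TensorRT version is installed on the machine running DeepStream, letting DeepStream do the conversion avoids a build-environment mismatch.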

Thanks.
We want to use the same PC (PC A) to train models with TAO and to build DeepStream engines; these engines will then be used by a different PC (PC B) running DeepStream outside a Docker container. We are trying to figure out how to set up the PC we use for both training TAO models and building DeepStream engines.
For an engine built on PC A to run on PC B, do PC A and PC B need to have the same GPU driver version, CUDA version, and TensorRT version, or does only the TensorRT version need to match?

At least the TensorRT version should be the same. TensorRT depends on CUDA, please refer to this doc. CUDA depends on the driver, please refer to this doc.
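
If it helps, one quick way to compare the three versions on PC A and PC B is sketched below; it assumes a Debian/Ubuntu-style install, and the dpkg query assumes TensorRT was installed from the Debian packages:

```
# GPU driver version
nvidia-smi --query-gpu=driver_version --format=csv,noheader

# CUDA toolkit version (if the toolkit is installed)
nvcc --version

# TensorRT version, if installed from the Debian packages
dpkg -l | grep -i tensorrt

# Or, if the TensorRT Python bindings are installed
python3 -c "import tensorrt; print(tensorrt.__version__)"
```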

Thank you for your answers. I have another question: I am using TAO Deploy 5.5.0, which uses TensorRT version 8.6. However, the target machine for deployment runs DeepStream 7.1, which uses TensorRT version 10.3.

Can I still use TAO Deploy 5.5.0 to build TensorRT engines for use with DeepStream 7.1? If not, is there any solution to address this version mismatch?

No, you can convert the TAO model to a TensorRT 10.3 engine on the TensorRT 10.3 machine.
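
For example, one way to do this on the DeepStream 7.1 machine is trtexec, which ships with TensorRT. This is only a sketch: the file names are placeholders, the trtexec path assumes a standard package install, and it assumes your TAO model was exported to ONNX:

```
# Build a TensorRT 10.3 engine from the TAO-exported ONNX on the target machine
/usr/src/tensorrt/bin/trtexec \
    --onnx=tao_model.onnx \
    --saveEngine=tao_model_fp16.engine \
    --fp16
```

Alternatively, you can point the nvinfer configuration at the ONNX file, as in the sketch earlier in the thread, and let DeepStream 7.1 build the engine itself on first run.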

To confirm that I understand you correctly: in the documentation, there are two options for deploying a model trained with TAO to DeepStream, as shown in the screenshot I attached. Since the target machine for deployment uses TensorRT 10.3 but TAO Deploy 5.5.0 uses TensorRT 8.6, must I use option 1 in this case?

Yes, DeepStream will call the TensorRT interface to convert the model to a TensorRT engine.

Thanks for your response.
