PeopleNet on MX130/ GTX1060Q

Hi, I need to run PeopleNet model inference on either an MX130 or a GTX1060Q GPU system. I have a few queries on this:

  1. Can TLT be set up on these GPU systems? I do not meet the initial requirements mentioned in the documentation.
  2. To convert TLT models to TensorRT engines, do we need JetPack, or is it possible without it?

TLT is designed to run on x86 systems with an NVIDIA GPU, such as a GPU-powered workstation or a DGX system, or it can be run in any cloud with an NVIDIA GPU.

For inference, models can be deployed on any edge device, such as the embedded Jetson platform, or on data center GPUs like the T4.

So, yes, for inference, you can run on an MX130 or GTX1060Q GPU system.

To convert TLT models to a TensorRT engine, please download tlt-converter, then run it against the .etlt model to generate the TRT engine.
On Jetson, it is recommended to install CUDA/cuDNN/TensorRT via JetPack.
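For reference, a typical invocation looks roughly like the sketch below. The key, input dimensions, and output node names are assumptions based on the PeopleNet (DetectNet_v2) model card; verify them against the model card for the exact version you downloaded.

```bash
# Minimal sketch: convert the PeopleNet .etlt file to a TensorRT engine with tlt-converter.
# -k : model load key (tlt_encode is an assumption; check the PeopleNet model card)
# -d : input dimensions C,H,W (3,544,960 assumed from the model card)
# -o : output node names for DetectNet_v2 (assumed)
# -t : precision (use fp32 if your GPU does not benefit from fp16)
# -m : max batch size, -e : path of the engine to write
./tlt-converter resnet34_peoplenet.etlt \
  -k tlt_encode \
  -d 3,544,960 \
  -o output_cov/Sigmoid,output_bbox/BiasAdd \
  -t fp16 \
  -m 1 \
  -e peoplenet_fp16.engine
```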

Thanks for the response.
1. I have laptops with an MX130 and a GTX1060Q, one card each, so based on your comments I will not be able to set up TLT, right?
2. I do not have any embedded devices, so I cannot install CUDA/cuDNN/TensorRT via JetPack; I set them up individually. Since I am not able to install TLT, would I also not be able to use tlt-converter?
The intent is that I want to run PeopleNet inference on a laptop that has an MX130 or GTX1060Q. Is it possible, and if so, what steps should be followed?

  1. For TLT training, please check whether your laptop meets the requirements listed in Transfer Learning Toolkit — Transfer Learning Toolkit 3.0 documentation.

  2. If you do not have any embedded devices, you can still run inference on your host PC. The tlt-converter supports this. See Overview — TAO Toolkit 3.22.05 documentation.
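Once an engine has been generated with the x86 build of tlt-converter, a quick way to confirm it loads and runs on the host GPU is TensorRT's trtexec utility (typically installed under /usr/src/tensorrt/bin with a TensorRT package install). The engine file name below is just a placeholder for the engine you generated.

```bash
# Sanity-check that the generated engine deserializes and runs on the host GPU.
# Replace peoplenet_fp16.engine with the path to your own engine file.
trtexec --loadEngine=peoplenet_fp16.engine
```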

@Morganh I was able to successfully run PeopleNet on the MX130 system by converting the .etlt model to a TensorRT engine, but when I used that converted engine on a 1080 Ti with the same CUDA version (11.1) and TensorRT version (7.2.3), the model did not work. When I downloaded the model on the 1080 Ti system and converted it to a TRT engine there, it started working. Is the compute capability of the different systems the issue, or could something else have caused the error?

Yes, the compute capability is one of the reasons. TensorRT engines are built and tuned for the GPU they are generated on and are not portable across architectures, so it is recommended to generate the TRT engine on the device where you want to run inference.
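To confirm the mismatch, you can compare the compute capabilities of the two GPUs (the MX130 is a Maxwell-class part, while the 1080 Ti is Pascal). The query field below requires a reasonably recent driver; on older drivers, deviceQuery from the CUDA samples reports the same information.

```bash
# Print the GPU name and compute capability on each machine
# (the compute_cap query field needs a recent nvidia-smi).
nvidia-smi --query-gpu=name,compute_cap --format=csv
```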
