Question about how to check whether a project is running on the GPU when using TensorRT

When implementing a project on the PX2, I used the C++ API to create the CUDA engine:

```cpp
engine.reset(createCudaEngine(onnxModelPath, batchSize));
if (!engine)
    return 1;
```

and also created the execution context.

However, the program still runs really slowly, so I suspect it is not running on the GPU. Is there a command I can use to check whether the GPU is being used, and how can I make my project run on the GPU?

More generally, how can I tell whether my project is using the GPU for computation? How can I run my program on the GPU? I am a rookie and this is my first time working with the PX2 platform.

Dear @yingfanzhou9927,
Building the engine takes some time because TensorRT profiles different sets of CUDA kernels (one per DL layer implementation) and selects the optimized kernel for the given configuration. Once the engine is built, you can reuse it to infer multiple images.
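A common way to avoid paying the build/profiling cost on every run is to serialize the built engine to disk once and deserialize it on later runs. Below is a minimal sketch using the TensorRT C++ API; the function names `saveEngine`/`loadEngine` and the file path are illustrative assumptions, not part of the original post, and the explicit `destroy()` call matches the pre-TensorRT-8 API typical of PX2-era releases:

```cpp
#include <fstream>
#include <vector>
#include <NvInfer.h>

// Hypothetical helper: serialize a built engine so later runs can skip
// the slow build/profiling step. "engine" is assumed to be a built
// nvinfer1::ICudaEngine, as in the original post.
void saveEngine(nvinfer1::ICudaEngine& engine, const char* path)
{
    nvinfer1::IHostMemory* blob = engine.serialize();
    std::ofstream out(path, std::ios::binary);
    out.write(static_cast<const char*>(blob->data()), blob->size());
    blob->destroy();  // pre-TRT8 ownership convention
}

// Hypothetical helper: on subsequent runs, deserialize the cached engine
// instead of rebuilding it from the ONNX model.
nvinfer1::ICudaEngine* loadEngine(nvinfer1::IRuntime& runtime, const char* path)
{
    std::ifstream in(path, std::ios::binary | std::ios::ate);
    const std::streamsize size = in.tellg();
    in.seekg(0);
    std::vector<char> data(static_cast<size_t>(size));
    in.read(data.data(), size);
    return runtime.deserializeCudaEngine(data.data(), data.size(), nullptr);
}
```

With a cached engine, only the first run pays the kernel-profiling cost; every later run goes straight to inference.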

How can I know whether my project is using the GPU to calculate?

You can use profiling tools such as nvprof or Nsight Systems to check GPU utilization.
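For example, wrapping your binary in either profiler will show whether any CUDA kernels actually launch (the binary name `./my_app` is a placeholder; these commands require the NVIDIA profilers to be installed on the target):

```shell
# Legacy profiler: prints a summary of CUDA kernel launches and runtimes.
# If no kernels appear in the summary, the work is not running on the GPU.
nvprof ./my_app

# Nsight Systems: records a timeline (CPU threads, CUDA kernels, memcpys)
# into my_app_report for viewing in the Nsight Systems GUI.
nsys profile -o my_app_report ./my_app
```

On embedded DRIVE/Jetson-class boards, `tegrastats` can also report live GPU load while your program runs.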

Dear @yingfanzhou9927,
Could you please use your business email the next time you post queries? Thanks.

OK, Thanks.

SivaRamaKrishnaNV via NVIDIA Developer Forums <> wrote on Tue, Aug 25, 2020, 2:00 PM: