CC: @Honey_Patouceul @DaneLLL @amycao @kayccc @WayneWWW @icornejo.a @AastaLLL
I am currently using a Jetson Nano, as it was available for prototype-level testing.
Since I would like to extend the project to two CSI-based cameras and general-purpose CUDA programming applications,
how can I compare GPU benchmarks across Jetson modules and other NVIDIA GPU chipsets for my scalable application? Example: the Unigine Heaven benchmark.
Is there any benchmark information available on GPU performance for the various chips?
Hi @techguyz, you can find DNN inferencing benchmark results across the Jetson family on this page:
Thanks for the details. Is it possible to partition the available GPU cores so that two models run at once, achieving parallel computing with the performance divided between them?
You needn’t do that manually; the CUDA scheduler will handle it. CUDA kernels can execute concurrently if they are launched on separate CUDA streams. For example, you can call TensorRT’s enqueue function on a different CUDA stream for each of your two models:
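A minimal sketch of that pattern, assuming both engines have already been deserialized and their device binding buffers allocated (those steps, and error checking, are omitted; the function and variable names here are illustrative, not part of any API):

```cpp
#include <cuda_runtime.h>
#include <NvInfer.h>

// Run two TensorRT execution contexts concurrently on separate CUDA streams.
// ctxA/ctxB and bindingsA/bindingsB are assumed to be set up beforehand.
void runConcurrently(nvinfer1::IExecutionContext* ctxA,
                     nvinfer1::IExecutionContext* ctxB,
                     void** bindingsA, void** bindingsB)
{
    cudaStream_t streamA, streamB;
    cudaStreamCreate(&streamA);
    cudaStreamCreate(&streamB);

    // enqueueV2() is asynchronous: both inferences are queued without
    // blocking the host, so the GPU scheduler is free to overlap them.
    ctxA->enqueueV2(bindingsA, streamA, nullptr);
    ctxB->enqueueV2(bindingsB, streamB, nullptr);

    // Block until each model's work on its stream has finished
    // before reading back the output buffers.
    cudaStreamSynchronize(streamA);
    cudaStreamSynchronize(streamB);

    cudaStreamDestroy(streamA);
    cudaStreamDestroy(streamB);
}
```

Note that whether the two models actually overlap on the GPU depends on available resources (SMs, memory bandwidth); launching on separate streams makes concurrency possible, it doesn’t guarantee a 50/50 split.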