CUDA threading on Jetson Xavier for separate tasks

How can we use the CUDA GPU separately for each of our own object detection tasks, e.g., with threading?

Hi,

Please find an example below:

Thanks.

Can you please provide more information regarding this topic?
I want to run my object detection and frame capture Python programs as separate GPU tasks.

Hi,

The example shared above deploys multiple models with threads in C++.
If you want a Python-based example, please check whether the following sample meets your requirements:
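As a rough illustration of the threading pattern (this is a sketch, not the linked sample: `capture_frames` and `run_detection` below are placeholders for your camera and model code):

```python
import queue
import threading

def capture_frames(frame_queue, n_frames):
    # Placeholder for camera capture (e.g., cv2.VideoCapture on the real device).
    for i in range(n_frames):
        frame_queue.put(f"frame-{i}")
    frame_queue.put(None)  # sentinel: no more frames

def run_detection(frame_queue, results):
    # Placeholder for inference; replace with your TensorFlow/TensorRT call.
    while True:
        frame = frame_queue.get()
        if frame is None:
            break
        results.append(f"detections for {frame}")

# Bounded queue so capture cannot run arbitrarily far ahead of inference.
frame_queue = queue.Queue(maxsize=8)
results = []
producer = threading.Thread(target=capture_frames, args=(frame_queue, 5))
consumer = threading.Thread(target=run_detection, args=(frame_queue, results))
producer.start()
consumer.start()
producer.join()
consumer.join()
print(len(results))  # 5
```

Both threads live in one process, so they share the same CUDA context; the queue decouples capture speed from inference speed.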

Thanks.

I have already implemented object detection and frame capture in one Python program, but I want to split them into two separate processes and assign each process to a different GPU core. Is that possible?

Hi,

Do you want to use two processes, or two threads in the same process?
Two processes will share GPU resources by design, so you don't need to add manual control.

Please note that Jetson only has one GPU.
Resource sharing is implemented with time-slicing.

Thanks.

I want to specify the core manually. Also, my TensorFlow model's loading time is too long; can anyone suggest a suitable solution for this problem?

Hi,

How do you run inference on the TensorFlow model?
Do you run it with TensorFlow or TensorRT?

On Jetson, it’s recommended to convert the model to TensorRT for a better experience.
We have optimized TensorRT for Jetson, and its memory requirements are more suitable for an embedded system.
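For reference, one common route is to export the TensorFlow model to ONNX first and then build an engine with the `trtexec` tool that ships with TensorRT (the file names below are placeholders):

```shell
# Build a TensorRT engine from an ONNX export of the model.
# model.onnx / model.plan are placeholder names for your files.
trtexec --onnx=model.onnx \
        --saveEngine=model.plan \
        --fp16
```

Serializing the engine to disk with `--saveEngine` also helps with the slow model-loading issue mentioned above: deserializing a prebuilt engine at startup is much faster than loading the full TensorFlow graph or rebuilding the engine each run.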

You can find an example below:

Thanks.

Can you suggest any real-time example of TensorRT with SSD MobileNet?

Hi,

Have you converted your model to TensorRT successfully?
If yes, based on our benchmark table, Xavier NX can reach 900+ fps with SSD MobileNet-V1.

If you still find the pipeline slow, the bottleneck might be the multimedia interface (e.g., OpenCV).
In that case, you can refer to our DeepStream SDK examples; they include an SSD example in Python:

Thanks.