Partial offloading of deep learning on Jetson TX2 and Jetson AGX

Hi everyone,

I’m working on a lane detection project where I need to divide a PyTorch model into several components, with each component executed on a different edge device, in order to improve latency and memory consumption (see attached figure). In my case the devices are a Jetson TX2 and a Jetson AGX.

My questions are :

  1. Is there a method to easily communicate data between the two devices with the minimum possible transmission time?
  2. Is there a framework to divide a PyTorch model and get the results of a specific layer?
  3. What is the best structure for storing the data we want to send to the other device?

And finally, a question that is a little off topic: is it possible to run a lane marker detection model on the Jetson TX2’s camera in real time? I tried the LaneATT model but could not get it to run in real time.

Thank you in advance.


Hi,

1. You can check our DeepStream SDK.
It can send out payload messages with nvmsgbroker.
https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_plugin_gst-nvmsgbroker.html
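
Note that nvmsgbroker is aimed at metadata payloads (JSON messages sent to a broker such as Kafka or MQTT) rather than raw tensors. If what you need is to ship an intermediate activation from the TX2 to the AGX with minimal overhead, a plain TCP socket is a common lightweight alternative. Below is a minimal sketch (not DeepStream-specific); the address, port, and tensor shapes are placeholders for your setup.

```python
# Minimal sketch: stream an intermediate tensor between two Jetsons over TCP.
# AGX_HOST, PORT, and the tensor shapes are assumptions; adjust for your setup.
import socket
import struct

import numpy as np
import torch

AGX_HOST = "192.168.1.42"  # hypothetical address of the Jetson AGX
PORT = 5555


def _recv_exact(sock: socket.socket, n: int) -> bytes:
    """Read exactly n bytes from the socket."""
    chunks = []
    while n:
        chunk = sock.recv(n)
        if not chunk:
            raise ConnectionError("socket closed mid-message")
        chunks.append(chunk)
        n -= len(chunk)
    return b"".join(chunks)


def send_tensor(sock: socket.socket, t: torch.Tensor) -> None:
    """Send a tensor as a length-prefixed raw float32 buffer."""
    buf = t.detach().cpu().numpy().astype(np.float32).tobytes()
    sock.sendall(struct.pack("!I", len(buf)) + buf)


def recv_tensor(sock: socket.socket, shape) -> torch.Tensor:
    """Receive a length-prefixed float32 buffer and rebuild the tensor."""
    (n,) = struct.unpack("!I", _recv_exact(sock, 4))
    data = _recv_exact(sock, n)
    return torch.from_numpy(
        np.frombuffer(data, dtype=np.float32).reshape(shape).copy()
    )


# TX2 side (client): run the first half of the model, ship the activation.
#   sock = socket.create_connection((AGX_HOST, PORT))
#   send_tensor(sock, first_half(frame_batch))
#
# AGX side (server): accept the connection and run the second half.
#   srv = socket.socket(); srv.bind(("", PORT)); srv.listen(1)
#   conn, _ = srv.accept()
#   mid = recv_tensor(conn, shape=(1, 64, 90, 160))  # assumed activation shape
#   out = second_half(mid)
```

Fixing float32 and a known shape keeps the protocol trivial; for question 3, a length-prefixed raw buffer like this (or NumPy’s .npy format) is usually cheaper to encode and decode than pickling whole tensors.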

2. You can mark the layer as an output with our ONNX GraphSurgeon tool.
The tensor can then be retrieved with either ONNXRuntime or TensorRT.
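
To make step 2 concrete, here is a minimal sketch, assuming the onnx, onnx-graphsurgeon, and onnxruntime packages are installed. The file names, the tensor name "backbone_out", and the input shape are placeholders for your exported model.

```python
# Minimal sketch: expose an intermediate ONNX tensor as a graph output,
# then fetch it with ONNXRuntime. Names and shapes below are placeholders.
import numpy as np
import onnx
import onnx_graphsurgeon as gs
import onnxruntime as ort

graph = gs.import_onnx(onnx.load("lane_model.onnx"))

# Pick the tensor produced by the layer you want to cut at.
intermediate = graph.tensors()["backbone_out"]
# If its dtype is not recorded in the graph, set it explicitly:
# intermediate.dtype = np.float32
graph.outputs.append(intermediate)
graph.cleanup()
onnx.save(gs.export_onnx(graph), "lane_model_marked.onnx")

# Run the marked model; sess.run(None, ...) returns every graph output,
# including the newly exposed intermediate tensor.
sess = ort.InferenceSession("lane_model_marked.onnx",
                            providers=["CPUExecutionProvider"])
input_name = sess.get_inputs()[0].name
dummy = np.random.rand(1, 3, 360, 640).astype(np.float32)  # assumed input shape
outputs = sess.run(None, {input_name: dummy})
```

The same marked ONNX file can also be handed to TensorRT (e.g. via trtexec) to build an engine that emits that tensor as one of its outputs.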

3. It’s recommended to check whether DeepStream can meet your requirements.

Detection performance depends on the model you use.
In our previous experience, many models can reach 30 fps on the TX2.

Thanks.
