(Rotated Bounding Boxes) Inferring live video on the Jetson platform with an ONNX model from the ODTK

Hello everyone,

I am a bit overwhelmed and don't know how to move forward with my project, so I hope to get some direction.

Overview/Intro:
My goal is to automatically recognize and cut out labels (all kinds of labels on parcels, SMD reels, or anything else) from a live video stream, probably from a Raspberry Pi HQ camera.
As labels can be randomly rotated, I used the Object Detection Toolkit (ODTK) to also get information about the orientation/rotation for the later cut-out and de-rotation step. The ODTK supports rotated bounding boxes, and I have already successfully trained a network on my own dataset for this stage.
I now have the model as a .pth file and have also converted it to ONNX with the ODTK (backbone: ResNet18FPN).
The model should run live on a Jetson device (I have access to the whole Jetson family).
I have some experience in Python and C#, but I have never touched C or C++.

Problem:
As stated, I am a bit overwhelmed about which route would be best to continue on. It should be easy to program (for me that means Python) and use the hardware acceleration of the Jetson devices to run as fast as possible (>8 FPS).

  1. I could try to use GStreamer/DeepStream, but I would need to write a custom plugin to handle the custom network. On my first try it didn't do much with my network (it didn't run; I can't remember the error).
  2. ODTK inference worked with images and recorded video, so it might be possible to modify it to run on live video.
  3. There are the Hello AI World tools, but I was not able to load my ONNX model successfully, and they would probably also need some modification to support rotation.
  4. Or maybe write TensorRT/TensorFlow code for inference.
  5. ONNX Runtime also seems like a possible route? (A minimal sketch of this option follows below.)
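
To make option 5 concrete, here is a minimal, hedged sketch of a live ONNX Runtime loop. It assumes onnxruntime(-gpu) is installed on the Jetson and that the exported ONNX contains only standard ops; if the ODTK export relies on its custom TensorRT decode/NMS plugins, those steps would have to be reimplemented in Python. The file name, input size, normalization, and camera index are placeholders, not values from the thread.

    # Minimal sketch: live inference with ONNX Runtime (option 5).
    # "model.onnx", the 640x480 input size, and camera index 0 are placeholders.
    import cv2
    import numpy as np
    import onnxruntime as ort

    # Prefer the GPU execution provider on Jetson, fall back to CPU.
    session = ort.InferenceSession(
        "model.onnx",
        providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
    )
    input_name = session.get_inputs()[0].name

    cap = cv2.VideoCapture(0)  # or a GStreamer pipeline string for the CSI camera
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Preprocess: resize and scale; the exact normalization must match
        # whatever ODTK used during training/export.
        blob = cv2.resize(frame, (640, 480)).astype(np.float32) / 255.0
        blob = np.transpose(blob, (2, 0, 1))[np.newaxis, ...]  # HWC -> NCHW
        outputs = session.run(None, {input_name: blob})
        # outputs hold the raw heads (scores, boxes, angles); decoding them
        # into rotated boxes is model-specific and not shown here.
    cap.release()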

Or maybe I'm missing something that already exists for live inference with rotated bounding boxes?

So my questions are:
Which route would you recommend diving into and working through?
Which platform should I use (Jetson Nano/NX/AGX Xavier; compatibility is more important to me than price)?

• Hardware Platform (Jetson / GPU): Jetson Nano/NX/AGX Xavier
• Issue Type (questions, new requirements, bugs): Question

Hi @DMA,
Questions for Jetson platform selection:

  1. Can your ONNX model run with TensorRT?
  2. Which platform did you try with the TensorRT tool trtexec, and what is the performance? (An example invocation follows below.)
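
For reference, a typical trtexec benchmark run might look like the following; the paths are placeholders, and if the ODTK export uses its custom decode/NMS plugins, the plugin library would additionally need to be loaded with --plugins:

    trtexec --onnx=model.onnx --fp16 --saveEngine=model.plan

trtexec prints latency and throughput figures at the end of the run, which answers the performance question directly.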

DeepStream pipeline/sample to start with:
Since you are good at Python and not familiar with C or C++, you could start with the DeepStream Python sample apps/deepstream-test1 (code). A minimal sketch of the buffer-probe pattern used there follows below.
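
To give an idea of where custom post-processing would hook in, here is a hedged sketch of the buffer-probe pattern from the DeepStream Python samples (pyds). Note that the standard NvDsObjectMeta only carries axis-aligned boxes; a rotated-box model would additionally need custom tensor-output parsing, which is not shown.

    # Sketch of a DeepStream Python buffer probe (pattern from deepstream-test1).
    # Standard metadata has no rotation angle; rotated boxes would require
    # enabling output-tensor-meta and parsing the raw tensors yourself.
    import pyds
    from gi.repository import Gst

    def sink_pad_buffer_probe(pad, info, u_data):
        gst_buffer = info.get_buffer()
        if not gst_buffer:
            return Gst.PadProbeReturn.OK
        batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
        l_frame = batch_meta.frame_meta_list
        while l_frame is not None:
            frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
            l_obj = frame_meta.obj_meta_list
            while l_obj is not None:
                obj = pyds.NvDsObjectMeta.cast(l_obj.data)
                r = obj.rect_params  # axis-aligned box from standard metadata
                print(obj.class_id, obj.confidence, r.left, r.top, r.width, r.height)
                try:
                    l_obj = l_obj.next
                except StopIteration:
                    break
            try:
                l_frame = l_frame.next
            except StopIteration:
                break
        return Gst.PadProbeReturn.OK

    # Attached during pipeline setup, as in the sample:
    # osdsinkpad = nvosd.get_static_pad("sink")
    # osdsinkpad.add_probe(Gst.PadProbeType.BUFFER, sink_pad_buffer_probe, 0)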

Other question:
Do you need to rotate the image before inference?

I have the same problem, but @mchi, it seems you didn't understand what @DMA asked.
We used the following article as a reference: Detecting Rotated Objects Using the NVIDIA Object Detection Toolkit | NVIDIA Developer Blog
It works, but only via the "odtk infer" command. We want to use it on a live video stream on a Jetson device. So, which platform can we use, given that DeepStream has no available plugin for rotated bounding boxes?
Cheers

Are there any updates on this topic? Can DeepStream support rotated bbox models?

DeepStream does not support rotated bounding boxes.
But you could add a probe on the sink pad of the SGIE and rotate the bounding-box region with the NPPI nppiWarpAffine*() API.
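
For readers who stay in Python, here is a CPU-side illustration of the same de-rotation idea using OpenCV rather than NPPI. The (cx, cy, w, h, angle) box format is an assumption chosen to match a rotated-box detector output; bounds clamping is omitted for brevity.

    # CPU illustration (OpenCV) of the de-rotation the attached NPPI sample
    # performs on the GPU. Assumes a rotated box as (cx, cy, w, h, angle in degrees).
    import cv2

    def crop_rotated_box(image, cx, cy, w, h, angle_deg):
        # Rotate the whole image around the box center so the label becomes
        # axis-aligned, then crop it out. No bounds clamping for brevity.
        M = cv2.getRotationMatrix2D((cx, cy), angle_deg, 1.0)
        rotated = cv2.warpAffine(image, M, (image.shape[1], image.shape[0]))
        x0, y0 = int(cx - w / 2), int(cy - h / 2)
        return rotated[y0:y0 + int(h), x0:x0 + int(w)]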

Attached is a sample using nppiWarpAffine*():

testwarpaffinebatch.tgz (142.7 KB)