- Jetson AGX Xavier (bare metal, no container)
- DeepStream 5.0
- Jetpack 4.5
I have more of a philosophical question than a demonstrable problem…
I’m hoping those further up the learning curve can help answer a question I’m being pressed on.
I’m just embarking on a video analysis challenge with requirements for high frame rates and low latency. There appear to be zero-copy pixel data paths from camera sensor to memory (e.g. for camera interfaces like PCIe) which can satisfy the latency constraint (at most one frame of delay, sensor to GPU memory). So far, so good. :-)
Now I’m told the camera sensor may be installed rotated 90 degrees. How worried should I be about this?
I know GPUs can rotate images at pixel-fill rates, which is pretty darn quick. But would rotation necessarily imply a buffer copy (i.e. latency-harming double-buffering) before frames are fed into the detection code?
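To illustrate the distinction I mean (a CPU/NumPy analogy only, not the actual Jetson NVMM path): the rotation itself can be pure bookkeeping, and the copy only appears when a downstream consumer demands a packed buffer.

```python
import numpy as np

# Hypothetical stand-in for a camera frame: 1080x1920, single channel.
frame = np.zeros((1080, 1920), dtype=np.uint8)

# np.rot90 itself is "free": it returns a strided view, no pixels move.
rotated = np.rot90(frame)
assert np.shares_memory(rotated, frame)   # still the same pixel buffer
assert not rotated.flags['C_CONTIGUOUS']  # but no longer packed row-major

# The copy happens the moment a consumer requires contiguous memory,
# e.g. before handing the buffer to an inference engine:
packed = np.ascontiguousarray(rotated)
assert packed.flags['C_CONTIGUOUS']
assert not np.shares_memory(packed, frame)  # this step paid for a full copy
```

My question is essentially whether the GPU equivalent of that last step is avoidable in the DeepStream pipeline.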
Alternatively, perhaps we should run analysis on the rotated image, and then rotate the image (and the bounding boxes) back upright at output rendering. Would we need to train with rotated images, or do the algorithms confidently accept that objects may be lying on their sides?
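For that second option, mapping detections back upright is just a coordinate swap, so I assume it is essentially free. A sketch of the math (the function name, and the assumption that the sensor is mounted 90 degrees clockwise, are mine):

```python
def unrotate_box_ccw(box, rot_w):
    """Map an axis-aligned box from a frame captured rotated 90 degrees
    clockwise back into upright coordinates (a 90-degree counter-clockwise
    correction). `box` is (x, y, w, h) in the rotated frame's pixel
    coordinates; `rot_w` is the rotated frame's width.
    """
    x, y, w, h = box
    # A point (x, y) rotates CCW to (y, rot_w - x); width and height swap.
    return (y, rot_w - (x + w), h, w)

# Example: a 50x80 box at (100, 200) in a 1920-wide rotated frame.
print(unrotate_box_ccw((100, 200, 50, 80), 1920))  # -> (200, 1770, 80, 50)
```

A full-frame box (0, 0, rot_w, rot_h) maps to (0, 0, rot_h, rot_w), which is a quick sanity check that the corners land where they should.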
I’m sorry not to be more specific at this point, but I’m hoping for a little helpful advice, or assurance, from anyone with more experience and/or more imagination…