• Hardware Platform (Jetson / GPU): Jetson Orin Nano
• DeepStream Version: 6.2
• JetPack Version (valid for Jetson only): 5.1.1-b56
Hi,
I’m using the Jetson Orin Nano 8 GB in headless mode with a live video feed. I run inference on the feed from a USB camera, then use the nvdrmvideosink plugin to display the feed with the predicted bounding boxes, as well as some static graphs.
I need to use a Waveshare screen which has a vertical orientation by default: https://www.waveshare.com/wiki/5.5inch_HDMI_AMOLED
This means I need to rotate the video by 90° before displaying it. I cannot rotate the feed before inference (before nvinfer), because my model is trained on the “horizontal view”; it needs the unrotated feed straight from the camera. So the rotation has to happen after inference in my gst-launch pipeline. However, when I use nvvideoconvert flip-method=3 to rotate the frames, the bounding box positions are not rotated along with the video, so they appear at the wrong positions on the screen (X and Y axes swapped).
Here is my full command line:
/bin/gst-launch-1.0 v4l2src device=/dev/video0 \
  ! 'video/x-raw, format=(string)YUY2, width=(int)640, height=(int)480, framerate=(fraction)30/1' \
  ! nvvideoconvert \
  ! m.sink_0 nvstreammux name=m batch-size=1 width=1920 height=1080 \
  ! nvinfer config-file-path=/opt/nvidia/deepstream/deepstream-6.2/samples/configs/deepstream-app/pgie_yolov4_tao_config_test.txt batch-size=4 unique-id=1 \
  ! nvtracker tracker-width=640 tracker-height=384 ll-lib-file=/opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so ll-config-file=…/…/samples/configs/deepstream-app/config_tracker_NvDCF_perf.yml gpu-id=0 enable-batch-process=1 enable-past-frame=1 display-tracking-id=0 \
  ! nvvideoconvert flip-method=3 \
  ! dsexample full-frame=1 \
  ! nvdsosd \
  ! nvdrmvideosink sync=0 -ev
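As a fallback, I’m considering attaching a probe after the rotation and remapping each object’s rectangle myself. Assuming flip-method=3 is a 90° clockwise rotation, my understanding of the coordinate math is sketched below (plain Python, not the actual DeepStream metadata API; rotate_bbox_cw is just a name I made up):

```python
def rotate_bbox_cw(left, top, width, height, frame_w, frame_h):
    """Remap a bounding box after a 90-degree clockwise rotation.

    The source frame is frame_w x frame_h; the rotated frame is
    frame_h x frame_w. A point (x, y) maps to (frame_h - y, x), so
    the box's new top-left corner comes from its old bottom-left
    corner, and its width and height swap.
    """
    new_left = frame_h - (top + height)
    new_top = left
    return new_left, new_top, height, width


# Example on a 1920x1080 frame (the nvstreammux output size):
# a 100x50 box at (200, 300) moves to (730, 200) with size 50x100.
print(rotate_bbox_cw(200, 300, 100, 50, 1920, 1080))
```

If this math is right, I could apply it to every detected object before drawing, but I’d rather avoid custom code if the pipeline itself can handle it.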
Is there a way to rotate the video stream and the bounding boxes together? Or is there a better way to approach this problem?
Thanks.
ML