There might be a problem here: your pipeline uses nvoverlaysink as sink. That pipeline does no processing and doesn't use OpenCV at all; it is just launched by OpenCV.
For reading frames through OpenCV videoio, you would have to convert into BGR and use appsink as the sink:
cv2.VideoCapture('filesrc location=test.mp4 ! qtdemux ! queue ! h264parse ! omxh264dec ! nvvidconv ! video/x-raw, format=BGRx ! videoconvert ! video/x-raw, format=BGR ! appsink', cv2.CAP_GSTREAMER)
# or
cv2.VideoCapture('filesrc location=test.mp4 ! qtdemux ! queue ! nvv4l2decoder ! nvvidconv ! video/x-raw, format=BGRx ! videoconvert ! video/x-raw, format=BGR ! appsink', cv2.CAP_GSTREAMER)
Then do your processing with OpenCV and display the results. cv2.imshow may work for low resolution × framerate.
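A minimal sketch of such a loop, assuming the second pipeline above (the test.mp4 path and the Canny call are just placeholders for your own file and processing):

import cv2

# Open the decode pipeline; appsink delivers BGR frames to VideoCapture.
pipeline = ('filesrc location=test.mp4 ! qtdemux ! queue ! nvv4l2decoder ! '
            'nvvidconv ! video/x-raw, format=BGRx ! videoconvert ! '
            'video/x-raw, format=BGR ! appsink')
cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
if not cap.isOpened():
    raise RuntimeError('Failed to open GStreamer pipeline')

while True:
    ok, frame = cap.read()
    if not ok:
        break                                 # end of stream or read error
    edges = cv2.Canny(frame, 100, 200)        # placeholder processing
    cv2.imshow('edges', edges)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()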
For higher values, you may use a cv2.VideoWriter with a GStreamer pipeline ending in nvoverlaysink or nveglglessink.
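A rough sketch of such a writer feeding nvoverlaysink (the 1920x1080@30 values are assumptions; they must match the frames you push):

import cv2

# appsrc receives BGR frames from VideoWriter; nvvidconv copies them into
# NVMM memory so nvoverlaysink can display them.
w, h, fps = 1920, 1080, 30
writer = cv2.VideoWriter(
    'appsrc ! video/x-raw, format=BGR ! queue ! videoconvert ! '
    'video/x-raw, format=BGRx ! nvvidconv ! nvoverlaysink',
    cv2.CAP_GSTREAMER, 0, float(fps), (w, h))
if not writer.isOpened():
    raise RuntimeError('Failed to open GStreamer writer')

# In the capture loop above, replace cv2.imshow(...) with writer.write(frame)
# (frame must be a BGR array of size w x h) and call writer.release() at the end.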
If you want to use the GPU, you may also get RGB frames with jetson-utils, as in this example.
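A rough sketch of that approach, assuming a recent jetson-utils (module and method names may differ slightly between versions; check the dusty-nv/jetson-utils video examples for your release):

from jetson_utils import videoSource, videoOutput

source = videoSource('file://test.mp4')      # decoded by the HW decoder
display = videoOutput('display://0')         # OpenGL window on the Jetson

while display.IsStreaming():
    img = source.Capture()                   # CUDA image in GPU memory (RGB)
    if img is None:                          # timeout or end of stream
        break
    # ... run your GPU-side processing on img here ...
    display.Render(img)
    if not source.IsStreaming():
        break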
For your questions:
- The pipeline in your first post uses omxh264dec, which runs on the HW decoder. You can check its activity (reported as NVDEC) by running tegrastats. It outputs into NVMM memory so that nvoverlaysink can display it directly.
- The first one is limited by storage and filesystem speed. The second one is limited by the network, the adapter and its drivers, plus some buffering options.
- I cannot advise here; I have little knowledge of these samples.