Recording video from camera

Glad to see it worked out.
The second term signal may not be relevant in your case (it is sometimes required for terminating a pipeline with EOS when using nvarguscamerasrc).

You would just change the VideoCapture from the V4L one, VideoCapture(0), to the GStreamer pipeline capture. Then I would expect the framerate to be readable. Be sure to comment out imshow. If you need display, it should be possible to use a VideoWriter for that.
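
If display is needed, a VideoWriter whose pipeline ends in a display sink can replace imshow. A minimal sketch, assuming a sink such as autovideosink is available (the pipeline, size and framerate below are illustrative placeholders):

import cv2

# Rough sketch only: a second "writer" whose GStreamer pipeline ends in a
# display sink instead of a file. Size and framerate are placeholders.
w, h, fps = 1920, 1080, 30.0
disp = cv2.VideoWriter("appsrc ! queue ! videoconvert ! autovideosink",
                       cv2.CAP_GSTREAMER, 0, fps, (w, h))

# In the capture loop, call disp.write(frame) where imshow would be used,
# and disp.release() when done.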

recordVideo.py (958 Bytes)

Attached is what I attempted based on what I think you are suggesting, but I still get a "Failed to open output" error message.

I added the following after the cap statement to see what it was returning:

w = cap.get(cv2.CAP_PROP_FRAME_WIDTH)
h = cap.get(cv2.CAP_PROP_FRAME_HEIGHT)
fps = cap.get(cv2.CAP_PROP_FPS)
print('Src opened, %dx%d @ %d fps' % (w, h, fps))

The result is: Src opened, 0x0 @ 0 fps

So it doesn’t look like it is opening the capture source.

This is what I am using, but I have tried many modifications and so far have had no success.

cap = cv2.VideoCapture("v4l2src device=/dev/video0 ! video/x-raw,format=YUY2,width=1920,height=1080,framerate=30/1 ! nvvidconv ! video/x-raw,format=BGRx ! videoconvert ! video/x-raw,format=BGR ! appsink drop=1 ", cv2.CAP_GSTREAMER)

It seems the capture pipeline failed to start.

You would try this customization of your code:

import cv2

# Get information about your opencv build options
print(cv2.getBuildInformation())

# Open camera
cap = cv2.VideoCapture("v4l2src device=/dev/video0 ! video/x-raw,format=YUY2,width=1920,height=1080,framerate=30/1 ! nvvidconv ! video/x-raw,format=BGRx ! videoconvert ! video/x-raw,format=BGR ! appsink drop=1 ", cv2.CAP_GSTREAMER)
if not cap.isOpened():
    print("Failed to open camera")
    exit()

# Open writer to H264/MKV file
gst_out = "appsrc ! queue ! videoconvert ! video/x-raw,format=BGRx ! nvvidconv ! nvv4l2h264enc ! video/x-h264,format=byte-stream ! h264parse ! matroskamux ! filesink location=test-nvh264-writer.mkv "
out = cv2.VideoWriter(gst_out, cv2.CAP_GSTREAMER, 0, float(cap.get(cv2.CAP_PROP_FPS)), (int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)),int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))))
if not out.isOpened():
    print("Failed to open output")
    exit()

# Loop
while(cap.isOpened()):
    ret, frame = cap.read()
    if ret==True:
        out.write(frame)
        #cv2.imshow('frame',frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    else:
        break

cap.release()
out.release()
cv2.destroyAllWindows()

and post the output so we can see some details.

Output:

General configuration for OpenCV 4.0.0 =====================================
  Version control:               unknown

  Extra modules:
Location (extra):            /home/name/opencv_contrib/modules
Version control (extra):     unknown

  Platform:
Timestamp:                   2021-01-15T19:02:50Z
Host:                        Linux 4.9.140-tegra aarch64
CMake:                       3.13.3
CMake generator:             Unix Makefiles
CMake build tool:            /usr/bin/make
Configuration:               RELEASE

  CPU/HW features:
Baseline:                    NEON FP16
  required:                  NEON
  disabled:                  VFPV3

  C/C++:
Built as dynamic libs?:      YES
C++ Compiler:                /usr/bin/c++  (ver 7.5.0)
C++ flags (Release):         -fsigned-char -W -Wall -Werror=return-type -Werror=non-virtual-dtor -Werror=address -Werror=sequence-point -Wformat -Werror=format-security -Wmissing-declarations -Wundef -Winit-self -Wpointer-arith -Wshadow -Wsign-promo -Wuninitialized -Winit-self -Wno-narrowing -Wno-delete-non-virtual-dtor -Wno-comment -Wimplicit-fallthrough=3 -Wno-strict-overflow -fdiagnostics-show-option -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections    -fvisibility=hidden -fvisibility-inlines-hidden -O3 -DNDEBUG  -DNDEBUG
C++ flags (Debug):           -fsigned-char -W -Wall -Werror=return-type -Werror=non-virtual-dtor -Werror=address -Werror=sequence-point -Wformat -Werror=format-security -Wmissing-declarations -Wundef -Winit-self -Wpointer-arith -Wshadow -Wsign-promo -Wuninitialized -Winit-self -Wno-narrowing -Wno-delete-non-virtual-dtor -Wno-comment -Wimplicit-fallthrough=3 -Wno-strict-overflow -fdiagnostics-show-option -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections    -fvisibility=hidden -fvisibility-inlines-hidden -g  -O0 -DDEBUG -D_DEBUG
C Compiler:                  /usr/bin/cc
C flags (Release):           -fsigned-char -W -Wall -Werror=return-type -Werror=non-virtual-dtor -Werror=address -Werror=sequence-point -Wformat -Werror=format-security -Wmissing-declarations -Wmissing-prototypes -Wstrict-prototypes -Wundef -Winit-self -Wpointer-arith -Wshadow -Wuninitialized -Winit-self -Wno-narrowing -Wno-comment -Wimplicit-fallthrough=3 -Wno-strict-overflow -fdiagnostics-show-option -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections    -fvisibility=hidden -O3 -DNDEBUG  -DNDEBUG
C flags (Debug):             -fsigned-char -W -Wall -Werror=return-type -Werror=non-virtual-dtor -Werror=address -Werror=sequence-point -Wformat -Werror=format-security -Wmissing-declarations -Wmissing-prototypes -Wstrict-prototypes -Wundef -Winit-self -Wpointer-arith -Wshadow -Wuninitialized -Winit-self -Wno-narrowing -Wno-comment -Wimplicit-fallthrough=3 -Wno-strict-overflow -fdiagnostics-show-option -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections    -fvisibility=hidden -g  -O0 -DDEBUG -D_DEBUG
Linker flags (Release):      
Linker flags (Debug):        
ccache:                      NO
Precompiled headers:         YES
Extra dependencies:          dl m pthread rt
3rdparty dependencies:

  OpenCV modules:
To be built:                 aruco bgsegm bioinspired calib3d ccalib core datasets dnn dnn_objdetect dpm face features2d flann freetype fuzzy gapi hdf hfs highgui img_hash imgcodecs imgproc java_bindings_generator line_descriptor ml objdetect optflow phase_unwrapping photo plot python3 python_bindings_generator reg rgbd saliency shape stereo stitching structured_light superres surface_matching text tracking ts video videoio videostab xfeatures2d ximgproc xobjdetect xphoto
Disabled:                    world
Disabled by dependency:      -
Unavailable:                 cnn_3dobj cudaarithm cudabgsegm cudacodec cudafeatures2d cudafilters cudaimgproc cudalegacy cudaobjdetect cudaoptflow cudastereo cudawarping cudev cvv java js matlab ovis python2 sfm viz
Applications:                tests perf_tests examples apps
Documentation:               NO
Non-free algorithms:         YES

  GUI: 
GTK+:                        YES (ver 3.22.30)
  GThread :                  YES (ver 2.56.4)
  GtkGlExt:                  NO
VTK support:                 NO

  Media I/O: 
ZLib:                        /usr/lib/aarch64-linux-gnu/libz.so (ver 1.2.11)
JPEG:                        /usr/lib/aarch64-linux-gnu/libjpeg.so (ver 80)
WEBP:                        build (ver encoder: 0x020e)
PNG:                         /usr/lib/aarch64-linux-gnu/libpng.so (ver 1.6.34)
TIFF:                        /usr/lib/aarch64-linux-gnu/libtiff.so (ver 42 / 4.0.9)
JPEG 2000:                   build (ver 1.900.1)
OpenEXR:                     build (ver 1.7.1)
HDR:                         YES
SUNRASTER:                   YES
PXM:                         YES
PFM:                         YES

  Video I/O:
DC1394:                      NO
FFMPEG:                      YES
  avcodec:                   YES (ver 57.107.100)
  avformat:                  YES (ver 57.83.100)
  avutil:                    YES (ver 55.78.100)
  swscale:                   YES (ver 4.8.100)
  avresample:                NO
GStreamer:                   NO
v4l/v4l2:                    linux/videodev2.h

  Parallel framework:            pthreads

  Trace:                         YES (built-in)

  Other third-party libraries:
Lapack:                      NO
Eigen:                       NO
Custom HAL:                  YES (carotene (ver 0.0.1))
Protobuf:                    build (3.5.1)

  OpenCL:                        YES (no extra features)
Include path:                /home/name/opencv/3rdparty/include/opencl/1.2
Link libraries:              Dynamic load

  Python 3:
Interpreter:                 /home/name/.local/bin/.virtualenvs/env1/bin/python3 (ver 3.6.9)
Libraries:                   /usr/lib/aarch64-linux-gnu/libpython3.6m.so (ver 3.6.9)
numpy:                       /home/name/.local/bin/.virtualenvs/env1/lib/python3.6/site-packages/numpy/core/include (ver 1.16.1)
packages path:               lib/python3.6/site-packages

  Python (for build):            /usr/bin/python2.7

  Java:                          
ant:                         NO
JNI:                         NO
Java wrappers:               NO
Java tests:                  NO

  Install to:                    /usr/local
-----------------------------------------------------------------

GStreamer: NO is the problem. You may rebuild and reinstall OpenCV with GStreamer support using one of these scripts.

I am on it. I will report back results.

Thank you so much for the support these past few days.

OK, so I recompiled OpenCV with GStreamer. I now get this in the build information:

Video I/O:
    DC1394:                      NO
    FFMPEG:                      YES
      avcodec:                   YES (ver 57.107.100)
      avformat:                  YES (ver 57.83.100)
      avutil:                    YES (ver 55.78.100)
      swscale:                   YES (ver 4.8.100)
      avresample:                NO
    GStreamer:                   
      base:                      YES (ver 1.14.5)
      video:                     YES (ver 1.14.5)
      app:                       YES (ver 1.14.5)
      riff:                      YES (ver 1.14.5)
      pbutils:                   YES (ver 1.14.5)

I have tried multiple configurations of my cap = statement, but no matter what I try, I get the following error:

(python3:8867): GStreamer-CRITICAL **: 09:38:33.873: gst_element_get_state: assertion 'GST_IS_ELEMENT (element)' failed
Failed to open camera

Any ideas?

OK, so this seems to be the winner:

cap = cv2.VideoCapture("v4l2src device=/dev/video0 ! video/x-raw,width=1920,height=1080,format=YUY2,framerate=30/1 ! videoconvert ! video/x-raw,format=BGR ! appsink drop=1", cv2.CAP_GSTREAMER)

This now opens the stream and records the video. The result is the same as the video recorded using the above gst-launch command. I cannot open it with VLC on the Xavier, but I can play it on a different PC. Both of these, however, still have one issue: they are played back at 2x speed.

On Windows, if I look at the video properties, it shows a frame rate of 15 fps.

Printing the value of cv2.CAP_PROP_FPS gives me 30 fps.

It may be an issue with the player; I don’t see that here.
For the test_h264.mkv file recorded from gstreamer, here is what I get:

# I see framerate=30/1 from this
gst-launch-1.0 filesrc location= ./test_h264.mkv ! matroskademux ! h264parse ! fakesink -v

# I see a solid 30 fps when decoding on Jetson, even setting sync=true, and even with other video sinks.
gst-launch-1.0 filesrc location= ./test_h264.mkv ! matroskademux ! h264parse ! nvv4l2decoder ! fpsdisplaysink video-sink=fakesink text-overlay=0 sync=1 -v
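
As a cross-check from the OpenCV side, a small sketch (assuming the FFMPEG backend can open the MKV from the current directory):

import cv2

# Read back the recorded file and print the framerate reported by the container.
chk = cv2.VideoCapture("test_h264.mkv")
print("reported fps:", chk.get(cv2.CAP_PROP_FPS))
chk.release()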

Yes, it is playing back at 30 fps on the Jetson as well. I think the issue is that it has dropped every other frame, so I am encoding 15 fps video as 30 fps video. Then when it plays back, it is at double speed.

If I set the FPS to 15 manually in the “out” statement then the video playback speed is correct.

How can I get it to record the full 30FPS without dropping frames?

On another note, I am noticing that the file size is quite small compared to the MP4 files I was recording before. Is it possible to reduce compression or record the video stream uncompressed?

You may try to find where the bottleneck is. First check what your V4L2 source can provide:

gst-launch-1.0 v4l2src device=/dev/video0 ! video/x-raw,width=1920,height=1080,format=YUY2,framerate=30/1 ! fpsdisplaysink text-overlay=0 video-sink=fakesink -v

If this doesn’t show 30 fps, there is a problem with your USB cam. Be sure it is connected at least as USB2 (USB3 would be better); check with lsusb -t. You may also try the v4l2src property io-mode=2:

gst-launch-1.0 v4l2src device=/dev/video0 io-mode=2 ! video/x-raw,width=1920,height=1080,format=YUY2,framerate=30/1 ! fpsdisplaysink text-overlay=0 video-sink=fakesink -v

If the capture can run 30 fps, check the YUV to RGB conversion:

gst-launch-1.0 v4l2src device=/dev/video0 ! video/x-raw,width=1920,height=1080,format=YUY2,framerate=30/1 ! videoconvert ! video/x-raw,format=BGR ! fpsdisplaysink text-overlay=0 video-sink=fakesink -v

If this is where it slows down, you may try to perform the YUV → BGRx conversion with HW:

gst-launch-1.0 v4l2src device=/dev/video0 ! video/x-raw,width=1920,height=1080,format=YUY2,framerate=30/1 ! nvvidconv ! 'video/x-raw(memory:NVMM)' ! nvvidconv ! video/x-raw,format=BGRx ! videoconvert ! video/x-raw,format=BGR ! fpsdisplaysink text-overlay=0 video-sink=fakesink -v

If this is OK, then the problem may be with OpenCV. Don’t use imshow, and try lower modes provided by your camera.
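
To check the OpenCV side, a rough sketch like the one below (not verified here) times the bare cap.read() loop with imshow and the writer left out; the measured rate could also be passed to the VideoWriter instead of a hard-coded value:

import time
import cv2

# Time the bare capture loop: no imshow, no writer, just cap.read().
cap = cv2.VideoCapture("v4l2src device=/dev/video0 ! video/x-raw,width=1920,height=1080,format=YUY2,framerate=30/1 ! videoconvert ! video/x-raw,format=BGR ! appsink drop=1", cv2.CAP_GSTREAMER)
if not cap.isOpened():
    print("Failed to open camera")
    exit()

frames, t0 = 0, time.time()
while frames < 300:
    ret, frame = cap.read()
    if not ret:
        break
    frames += 1
print("read loop: %.1f fps" % (frames / (time.time() - t0)))
cap.release()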

The first two are a solid 30 FPS.

The third one is ~17-18 FPS.

The fourth one is mostly 30 FPS, but it is not stable and sometimes drops to ~26 FPS.

So what do I need to do to perform the conversion from YUV with HW?

The HW conversion is done in the 4th pipeline; however, nvvidconv doesn’t provide BGR format, only BGRx, so you have to use CPU videoconvert to remove the 4th byte of each 32-bit pixel.
As you seem close to 30 fps, you may try:

  • adding a queue before appsink, so that the OpenCV app may be scheduled on another CPU core.
  • adding another queue before videoconvert, or after v4l2src. This may depend on the load the CPUs already have.
  • using the n-threads property of videoconvert instead of, or in combination with, the above; you may test various configurations.
  • trying to do the BGRx conversion in the first nvvidconv copy to NVMM.

I tried adding in queues and n-threads as follows:

cap = cv2.VideoCapture("v4l2src device=/dev/video0 ! queue ! video/x-raw,width=1920,height=1080,format=YUY2,framerate=30/1 ! queue ! videoconvert n-threads=4 ! video/x-raw,format=BGR ! queue ! appsink", cv2.CAP_GSTREAMER)

No change.

If I try to change format=BGR to BGRx in the cap statement, it errors out.

I’d suggest trying with GStreamer alone for now, so that you get the correct framerate if possible before involving OpenCV.

Sorry, I made a typo in the previous post (corrected now).
For moving the YUV->BGRx conversion into the first nvvidconv instance, you would try:

gst-launch-1.0 v4l2src device=/dev/video0 ! video/x-raw,width=1920,height=1080,format=YUY2,framerate=30/1 ! nvvidconv ! 'video/x-raw(memory:NVMM), format=BGRx' ! nvvidconv ! video/x-raw,format=BGRx ! videoconvert ! video/x-raw,format=BGR ! fpsdisplaysink text-overlay=0 video-sink=fakesink -v

You may also boost your Jetson if not already done:

# Max performance mode. With Xavier NX, alternatively use -m2 for 6-core 15W mode
sudo nvpmodel -m0

# Boost clocks
sudo jetson_clocks

That command now returns 30 FPS, with a bit of variability down to 29.96 FPS at minimum.

Boosting the Jetson does not show a difference; the above command has the same variability either way.

With the change, all 4 of the commands from your previous message now return 30 FPS.

So probably videoconvert was the bottleneck when doing the full YUV -> BGR conversion.
Now you’ll see if OpenCV is the next bottleneck; let us know your best working solution.

How do I convert that gst-launch command into the OpenCV capture statement?

My attempt just fails:

cap = cv2.VideoCapture("v4l2src device=/dev/video0 ! video/x-raw,width=1920,height=1080,format=YUY2,framerate=30/1 ! nvvidconv ! 'video/x-raw(memory:NVMM), format=BGR' ! nvvidconv ! video/x-raw,format=BGRx ! videoconvert ! video/x-raw,format=BGR ! appsink", cv2.CAP_GSTREAMER)

Remove the single quotes from the string. They are only required in the shell, so that it doesn’t interpret the parentheses.
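
For reference, a sketch of that capture line with the quotes removed (it uses format=BGRx in the NVMM caps, as in the gst-launch pipeline above; not verified here):

import cv2

# Sketch: the gst-launch pipeline translated for cv2.VideoCapture, with the
# single quotes removed. format=BGRx in the NVMM caps follows the earlier
# gst-launch example; adjust if your setup differs.
cap = cv2.VideoCapture("v4l2src device=/dev/video0 ! video/x-raw,width=1920,height=1080,format=YUY2,framerate=30/1 ! nvvidconv ! video/x-raw(memory:NVMM),format=BGRx ! nvvidconv ! video/x-raw,format=BGRx ! videoconvert ! video/x-raw,format=BGR ! appsink drop=1", cv2.CAP_GSTREAMER)
print("capture opened:", cap.isOpened())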