Platform: Nano B01
Raspi cam: v2.1 (IMX219)
Custom plugin repo: gst-snapshot-plugin on GitHub
Goal: I want to record at 120fps while saving a snapshot image every 500ms to send to a neural net.
What I’ve done: I installed the custom plugin linked above and got it running (after hours of debugging). It works fine with the sample pipeline from the repo’s README (copied below), but…
Problem: The plugin’s src and sink pads both require RGB, and I’m not sure how to convert my pipeline’s NV12 camera output to RGB (and then back from RGB for h264 encoding).
What I’ve tried: various combinations of the videoconvert element, but I get errors such as “WARNING: erroneous pipeline: could not link videoconvert0 to snapshotfilter0” and “videoconvert0 can’t handle caps video/x-raw, width=(int)1280, height=(int)720, framerate=(fraction)120/1, format=(string)RGB”
Example failing pipeline:
gst-launch-1.0 nvarguscamerasrc ! 'video/x-raw(memory:NVMM),width=1280, height=720, framerate=120/1, format=NV12' ! videoconvert ! 'video/x-raw,width=1280, height=720, framerate=120/1, format=RGB' ! snapshotfilter trigger=true framedelay=60 filetype="jpeg" location="image.jpg" ! videoconvert ! omxh264enc ! qtmux ! filesink location=tester1.mp4 -e
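My current guess (untested): videoconvert can’t read NVMM buffers directly, so nvvidconv may be needed first to copy the frames into system memory. As far as I know nvvidconv doesn’t output plain RGB on Jetson, so I’d let it output RGBA and have videoconvert do the final RGBA→RGB step. A sketch of the variant I plan to try next:

```shell
# untested sketch: nvvidconv copies NVMM -> system memory (RGBA),
# then videoconvert handles RGBA -> RGB for the snapshot plugin
gst-launch-1.0 nvarguscamerasrc ! \
  'video/x-raw(memory:NVMM),width=1280,height=720,framerate=120/1,format=NV12' ! \
  nvvidconv ! 'video/x-raw,format=RGBA' ! \
  videoconvert ! 'video/x-raw,format=RGB' ! \
  snapshotfilter trigger=true framedelay=60 filetype="jpeg" location="image.jpg" ! \
  videoconvert ! omxh264enc ! qtmux ! filesink location=tester1.mp4 -e
```

I haven’t verified that omxh264enc negotiates cleanly with videoconvert’s output here; that’s part of what I’m asking about.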
Sample pipeline from plugin’s repo (works, but doesn’t do exactly what I want):
gst-launch-1.0 videotestsrc num-buffers=20 ! snapshotfilter trigger=true framedelay=15 filetype="jpeg" location="image.jpg" ! videoconvert ! xvimagesink
- I can record 720p@120fps using a basic pipeline without any issues (82% cpu)
- Will this plugin/format conversion inevitably tank my fps?
- Would I maybe be better off doing this with CUDA?
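In case it helps frame the fps question: the plugin-free alternative I’d compare against is splitting the stream with tee, recording one branch at full rate and snapshotting the other at 2 fps (one frame per 500ms). Untested sketch; I’m assuming nvvidconv, videorate, and multifilesink behave as documented:

```shell
# untested sketch: record at 120fps on one branch,
# save a JPEG every 500ms (2fps) on the other
gst-launch-1.0 nvarguscamerasrc ! \
  'video/x-raw(memory:NVMM),width=1280,height=720,framerate=120/1,format=NV12' ! \
  tee name=t \
  t. ! queue ! omxh264enc ! qtmux ! filesink location=tester1.mp4 \
  t. ! queue ! nvvidconv ! 'video/x-raw,format=I420' ! \
    videorate drop-only=true ! 'video/x-raw,framerate=2/1' ! \
    jpegenc ! multifilesink location=snap_%05d.jpg \
  -e
```

No idea yet whether the snapshot branch steals enough CPU to hurt the recording branch; that’s what the 82% figure above makes me nervous about.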
Here’s what I get when running `gst-inspect-1.0 snapshotfilter` for the pad info: