Capturing and saving frames prior to detection

I have a working app using jetson.inference in Python that starts saving video frames to file once a detection occurs. What I need to do is also capture the frames for a few seconds prior to the detection. I tried saving the cudaImages in a Python deque of fixed size, so that I always have, say, 200 frames in the deque that I can render to the output prior to my detection, but I don't get the expected frames in the video. Does a cudaImage need to be copied in order to be saved, or is there some other method to accomplish this?
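Roughly, what I have now looks like this (a simplified sketch of my loop; the model, camera URI, and output filename are just placeholders):

import jetson.inference
import jetson.utils

from collections import deque

net    = jetson.inference.detectNet("ssd-mobilenet-v2", threshold=0.5)   # placeholder model
input  = jetson.utils.videoSource("csi://0")                             # placeholder camera
output = jetson.utils.videoOutput("detection.mp4")                       # placeholder output file

prebuffer = deque(maxlen=200)    # keep the last 200 captured frames

while True:
    img = input.Capture()
    prebuffer.append(img)        # just storing the cudaImage reference, no copy

    detections = net.Detect(img)
    if len(detections) > 0:
        # flush the buffered frames, then continue rendering live frames
        for frame in prebuffer:
            output.Render(frame)
        prebuffer.clear()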

Hi @brking, are you saving these frames to a compressed video? If so, are you creating a new video file each time? For testing can you try saving them as a sequence of images instead?

Right now I use jetson.utils.videoOutput to write the frames to an mp4 file, and I do this for 10 seconds after the first detection occurs. I'm still not clear on how I can keep a short buffer of video frames in memory that I can write out to videoOutput before I start writing the detection frames. Since I have to do this continuously (to have frames ready when a detection occurs), I don't want to write them to files. So I basically need to keep the last couple of seconds of video in memory somewhere, and be able to grab that and write it to videoOutput. My approach was to take the cudaImage returned from Capture() and push it into a deque of fixed size, then pop those frames off and send them to videoOutput when I get a detection. But I suspect the captured images are not long-lived and would need to be cloned or something in order to be saved.

You are correct in that the cudaImage frames returned from videoSource.Capture() are in a ringbuffer, so they will be overwritten with the latest image. What I recommend is that you allocate your own ringbuffer of images with jetson.utils.cudaAllocMapped() and then copy the data into these.

import jetson.utils

ringbuffer = []
next_ringbuffer = 0

while True:
    img = input.Capture()    # 'input' is your existing jetson.utils.videoSource

    # on the first iteration, allocate a ringbuffer of images with the same size/format as the camera
    if len(ringbuffer) == 0:
        for i in range(100):    # allocate 100 frames (adjust this as needed)
            ringbuffer.append(jetson.utils.cudaAllocMapped(width=img.width, height=img.height, format=img.format))

    # get the next image from the ringbuffer to use
    ringbuffer_img = ringbuffer[next_ringbuffer]
    next_ringbuffer = (next_ringbuffer + 1) % len(ringbuffer)

    # copy the image - use cudaConvertColor() because there isn't a cudaMemcpy binding for Python
    # (this will be executed as a cudaMemcpy because the image color formats match)
    jetson.utils.cudaConvertColor(img, ringbuffer_img)

I use a ringbuffer here so that you only have to allocate the images once (allocation is an expensive operation), and their reference count will remain >= 1 because they stay in the ringbuffer list, so Python won't garbage-collect them. From here, you can put the ringbuffer image into a queue.
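For example, continuing the loop above (just a sketch; 'net', the deque size, and the output filename are placeholders for your own detection and recording code):

from collections import deque

# keep the last 200 copied frames - note the ringbuffer above needs to hold
# at least this many images, otherwise the oldest entries in the deque will
# point at buffers that have already been overwritten
pre_detection = deque(maxlen=200)

# ... inside the capture loop, after the cudaConvertColor() call ...
pre_detection.append(ringbuffer_img)

detections = net.Detect(img)
if len(detections) > 0:
    output = jetson.utils.videoOutput("detection.mp4")
    # write the buffered pre-detection frames out before the live frames
    while len(pre_detection) > 0:
        output.Render(pre_detection.popleft())
    # (then keep rendering the live frames to this same output for however long you want)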

I will give that a shot. Thanks for all of the details and code, Dusty.

This works for the most part. The odd thing is that the frames I'm saving in the ringbuffer play back at a higher speed in the final video than the frames I collect live. Here are the steps I'm taking:

  • Constantly save the last 200 frames in the ringbuffer and a Python deque
  • Wait for a detection
  • Create a videoOutput instance writing to an mp4 file
  • Pop the ringbuffer images off the deque one at a time and render them to videoOutput
  • Render live img frames to that same videoOutput instance for the next 10 seconds

So what controls the timing between frames? I thought it would be constant. Ohhh wait, I bet the video plays at whatever speed you render the frames to videoOutput. Ugh.
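I suppose I could pace the renders myself when I flush the buffer, something like this (an untested sketch, assuming a 30 FPS camera and the same pre_detection deque and videoOutput as in your sketch above), although that delays the live frames by however long the flush takes:

import time

FPS = 30.0    # assumed capture framerate

# render the buffered frames at roughly the capture rate so the
# arrival-time timestamps line up with real time
while len(pre_detection) > 0:
    start = time.monotonic()
    output.Render(pre_detection.popleft())
    elapsed = time.monotonic() - start
    time.sleep(max(0.0, 1.0 / FPS - elapsed))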

Yea, GStreamer automatically timestamps the images when they arrive at the output. You can try removing do-timestamp=true format=3 from this line of code:

https://github.com/dusty-nv/jetson-utils/blob/ebab1914877a51d4d33fa9b1f01b168adb712a32/codec/gstEncoder.cpp#L259

Then re-run make and sudo make install.

Is this the correct format for that line?

ss << "appsrc name=mysource is-live=true ! ";

I get some new errors and no video now.

------------------------------------------------
----> rendering 200 saved frames
RingBuffer -- allocated 2 buffers (1382400 bytes each, 2764800 bytes total)
[gstreamer] gstEncoder-- starting pipeline, transitioning to GST_STATE_PLAYING
[gstreamer] gstreamer changed state from NULL to READY ==> filesink1
[gstreamer] gstreamer changed state from NULL to READY ==> qtmux1
[gstreamer] gstreamer changed state from NULL to READY ==> h264parse2
[gstreamer] gstreamer changed state from NULL to READY ==> capsfilter2
[gstreamer] gstreamer changed state from NULL to READY ==> omxh264enc-omxh264enc1
[gstreamer] gstreamer changed state from NULL to READY ==> mysource
[gstreamer] gstreamer changed state from NULL to READY ==> pipeline2
[gstreamer] gstreamer changed state from READY to PAUSED ==> qtmux1
[gstreamer] gstreamer changed state from READY to PAUSED ==> h264parse2
[gstreamer] gstreamer changed state from READY to PAUSED ==> capsfilter2
[gstreamer] gstreamer changed state from READY to PAUSED ==> omxh264enc-omxh264enc1
[gstreamer] gstreamer stream status CREATE ==> src
[gstreamer] gstreamer changed state from READY to PAUSED ==> mysource
[gstreamer] gstreamer changed state from READY to PAUSED ==> pipeline2
[gstreamer] gstreamer stream status ENTER ==> src
[gstreamer] gstreamer message new-clock ==> pipeline2
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> qtmux1
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> h264parse2
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> capsfilter2
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> omxh264enc-omxh264enc1
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> mysource
[gstreamer] gstEncoder -- new caps: video/x-raw, width=1280, height=720, format=(string)I420, framerate=30/1
Framerate set to : 30 at NvxVideoEncoderSetParameterNvMMLiteOpen : Block : BlockType = 4 
===== NVMEDIA: NVENC =====
NvMMLiteBlockCreate : Block : BlockType = 4 
[gstreamer] gstreamer stream status CREATE ==> src
[gstreamer] gstreamer stream status ENTER ==> src
[gstreamer] gstEncoder -- pipeline full, skipping frame (1382400 bytes)
[gstreamer] gstEncoder -- pipeline full, skipping frame (1382400 bytes)
H264: Profile = 66, Level = 40 
[gstreamer] gstEncoder -- pipeline full, skipping frame (1382400 bytes)
[gstreamer] gstEncoder -- pipeline full, skipping frame (1382400 bytes)
[gstreamer] gstEncoder -- pipeline full, skipping frame (1382400 bytes)
[gstreamer] gstEncoder -- pipeline full, skipping frame (1382400 bytes)
[gstreamer] gstEncoder -- pipeline full, skipping frame (1382400 bytes)
NVMEDIA_ENC: bBlitMode is set to TRUE 
[gstreamer] gstreamer message stream-start ==> pipeline2
[gstreamer] gstreamer changed state from READY to PAUSED ==> filesink1
[gstreamer] gstreamer message async-done ==> pipeline2
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> filesink1
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> pipeline2
[gstreamer] gstreamer qtmux1 ERROR Could not multiplex stream.
[gstreamer] gstreamer Debugging info: gstqtmux.c(4561): gst_qt_mux_add_buffer (): /GstPipeline:pipeline2/GstQTMux:qtmux1:
Buffer has no PTS.
[gstreamer] gstreamer omxh264enc-omxh264enc1 ERROR Internal data stream error.
[gstreamer] gstreamer Debugging info: /dvs/git/dirty/git-master_linux/3rdparty/gst/gst-omx/omx/gstomxvideoenc.c(1383): gst_omx_video_enc_loop (): /GstPipeline:pipeline2/GstOMXH264Enc-omxh264enc:omxh264enc-omxh264enc1:
stream stopped, reason error
[gstreamer] gstreamer qtmux1 ERROR Could not multiplex stream.
[gstreamer] gstreamer Debugging info: gstqtmux.c(4561): gst_qt_mux_add_buffer (): /GstPipeline:pipeline2/GstQTMux:qtmux1:
Buffer has no PTS.
[gstreamer] gstreamer mysource ERROR Internal data stream error.
[gstreamer] gstreamer Debugging info: gstbasesrc.c(3055): gst_base_src_loop (): /GstPipeline:pipeline2/GstAppSrc:mysource:
streaming stopped, reason error (-5)


I haven’t tried it without timestamping before, and unfortunately it seems that do-timestamp is necessary unless you manually set the timestamps. My next guess is to manually set the timestamp to be monotonically increasing. You can try adding something like this here (and leave do-timestamp off):

https://github.com/dusty-nv/jetson-utils/blob/ebab1914877a51d4d33fa9b1f01b168adb712a32/codec/gstEncoder.cpp#L472

const int fps = 30;    // assumed framerate of the output video

// advance the timestamp by one frame interval and stamp the buffer with it
mTimestamp += gst_util_uint64_scale_int (1, GST_SECOND, fps);

GST_BUFFER_PTS (gstBuffer)      = mTimestamp;
GST_BUFFER_DTS (gstBuffer)      = mTimestamp;
GST_BUFFER_DURATION (gstBuffer) = gst_util_uint64_scale_int (1, GST_SECOND, fps);

You’ll also need to add GstClockTime mTimestamp as a member variable to the gstEncoder class and initialize it to 0 in the gstEncoder constructor. I haven’t done manual timestamping like this before, so this is my best guess.

Thanks, this may be deeper than I want to go right now, but I appreciate the tip. After that last change, even after reverting the codec edit, I still had to nuke my build folder and start over. Not sure why, but I'm back up and working again.
