Time-shift Encode/Decode and OpenCV pipeline

Hi Folks,

I am looking to create a 'time shift' pipeline using GStreamer. By this I mean recording a scene with cameras, writing the resulting bitstream to disk, and then re-reading the same video (after some delay) for further OpenCV processing, while the recording is still going on.

We are looking to record videos on Tegra hardware (TX1/TX2) and then process the recorded videos further.

Could someone please share any pointer, GStreamer code, or Argus/VisionWorks example for a time-shift pipeline?

Thanks

Hi,
For encoding, I think you can use hlssink:
gst-launch-1.0 nvcamerasrc num-buffers=300 ! 'video/x-raw(memory:NVMM),width=1280,height=720' ! omxh264enc ! mpegtsmux ! hlssink

It encodes into a new file every 15 seconds (the default segment length). You can process one segment, delete it, and then process the next one.
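
For example, the segment length and output paths can be tuned through hlssink properties. A sketch (not verified on TX2; the paths are illustrative, and you can run gst-inspect-1.0 hlssink to see all properties and defaults):

gst-launch-1.0 nvcamerasrc num-buffers=900 ! 'video/x-raw(memory:NVMM),width=1280,height=720' ! omxh264enc ! mpegtsmux ! hlssink target-duration=5 location=/home/ubuntu/segments/segment%05d.ts playlist-location=/home/ubuntu/segments/playlist.m3u8

Note that hlssink also has a max-files property that rotates old segments out, so raise it if you need every segment kept on disk.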

Hi DaneLLL

Any idea how to install hlssink on TX2?

I ran the following command line:

ubuntu@tegra-ubuntu:~/work/Experiment$ gst-launch-1.0 -e nvcamerasrc fpsRange="30.0 30.0" ! 'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)I420, framerate=(fraction)30/1' ! omxh265enc ! 'video/x-h265, stream-format=(string)byte-stream' ! h265parse ! matroskamux !hlssink
bash: !hlssink: event not found

Thanks

hlssink should be in gstreamer1.0-plugins-bad. (By the way, the "bash: !hlssink: event not found" error above comes from the shell, not from GStreamer: without a space after "!", bash treats "!hlssink" as history expansion. Write "... matroskamux ! hlssink" instead.)
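
You can check whether the element is present with gst-inspect-1.0 (part of gstreamer1.0-tools), and try the packaged plugins first if apt has them for your L4T release:

gst-inspect-1.0 hlssink
sudo apt-get install gstreamer1.0-plugins-bad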

Hi DaneLLL

I downloaded gstreamer1.0-plugins-bad, but I do not see hlssink in there.

sudo find / -name '*sink'
/home/ubuntu/Downloads/gst-plugins-bad-1.8.3/tools/element-templates/basesink
/home/ubuntu/Downloads/gst-plugins-bad-1.8.3/tools/element-templates/videosink
/home/ubuntu/Downloads/gst-plugins-bad-1.8.3/tools/element-templates/audiosink
/home/ubuntu/Downloads/gst-plugins-bad-1.8.3/tests/examples/avsamplesink
/home/ubuntu/Downloads/gst-plugins-bad-1.8.3/tests/examples/waylandsink
/home/ubuntu/Downloads/gst-plugins-bad-1.8.3/tests/examples/waylandsink/gtkwaylandsink
/home/ubuntu/Downloads/gst-plugins-bad-1.8.3/tests/examples/waylandsink/.libs/gtkwaylandsink
/home/ubuntu/Downloads/gst-plugins-bad-1.8.3/tests/examples/gtk/gtksink
/home/ubuntu/Downloads/gst-plugins-bad-1.8.3/tests/examples/gtk/gtkglsink
/home/ubuntu/Downloads/gst-plugins-bad-1.8.3/sys/d3dvideosink
/home/ubuntu/Downloads/gst-plugins-bad-1.8.3/sys/dshowvideosink
/home/ubuntu/Downloads/gst-plugins-bad-1.8.3/ext/apexsink
/home/ubuntu/.config/pulse/ae15719763c84b35196c20a95728b806-default-sink

Steps to install gstreamer1.0-plugins-bad:
https://devtalk.nvidia.com/default/topic/1023577/jetson-tx2/hlssink/post/5207537/#5207537

Thanks DaneLLL, I was able to install and run hlssink.

Going back to my original topic of time-shifting: I am looking to encode from the camera, write to a file, and then read it back to decode.

Is there a way a command line like this can work?

gst-launch-1.0 -e nvcamerasrc fpsRange="30.0 30.0" ! 'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)I420, framerate=(fraction)30/1' ! omxh265enc ! 'video/x-h265, stream-format=(string)byte-stream' ! h265parse ! matroskamux ! filesink location=/home/ubuntu/junk.mkv  filesrc location=/home/ubuntu/junk.mkv ! decodebin ! nvoverlaysink -e

When I tried encode followed directly by decode, without a file in between, it seemed to work, although with lots of jerks/frame drops:

gst-launch-1.0 -e nvcamerasrc fpsRange="30.0 30.0" ! 'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)I420, framerate=(fraction)30/1' ! omxh265enc ! 'video/x-h265, stream-format=(string)byte-stream' ! h265parse ! matroskamux ! decodebin ! nvoverlaysink -e

With the aforesaid command line I see frame drops and the following messages:

ubuntu@tegra-ubuntu:~$ gst-launch-1.0 -e nvcamerasrc fpsRange="30.0 30.0" ! 'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)I420, framerate=(fraction)30/1' ! omxh265enc ! 'video/x-h265, stream-format=(string)byte-stream' ! h265parse ! matroskamux ! decodebin ! nvoverlaysink -e
Setting pipeline to PAUSED ...

Available Sensor modes : 
3864 x 2174 FR=60.000000 CF=0x1009208a10 SensorModeType=4 CSIPixelBitDepth=10 DynPixelBitDepth=10
Pipeline is live and does not need PREROLL ...
Setting pipeline to PLAYING ...

NvCameraSrc: Trying To Set Default Camera Resolution. Selected sensorModeIndex = 0 WxH = 3864x2174 FrameRate = 60.000000 ...

New clock: GstSystemClock
Framerate set to : 30 at NvxVideoEncoderSetParameterNvMMLiteOpen : Block : BlockType = 8 
===== MSENC =====
NvMMLiteBlockCreate : Block : BlockType = 8 
===== NVENC blits (mode: 1) into block linear surfaces =====
NvMMLiteOpen : Block : BlockType = 279 
TVMR: NvMMLiteTVMRDecBlockOpen: 7907: NvMMLiteBlockOpen 
NvMMLiteBlockCreate : Block : BlockType = 279 
TVMR: cbBeginSequence: 1223: BeginSequence  1920x1088, bVPR = 0
TVMR: LowCorner Frequency = 180000 
TVMR: cbBeginSequence: 1622: DecodeBuffers = 3, pnvsi->eCodec = 10, codec = 9 
TVMR: cbBeginSequence: 1693: Display Resolution : (1920x1080) 
TVMR: cbBeginSequence: 1694: Display Aspect Ratio : (1920x1080) 
TVMR: cbBeginSequence: 1762: ColorFormat : 5 
TVMR: cbBeginSequence:1776 ColorSpace = NvColorSpace_YCbCr601
TVMR: cbBeginSequence: 1904: SurfaceLayout = 3
TVMR: cbBeginSequence: 2005: NumOfSurfaces = 10, InteraceStream = 0, InterlaceEnabled = 0, bSecure = 0, MVC = 0 Semiplanar = 1, bReinit = 1, BitDepthForSurface = 8 LumaBitDepth = 8, ChromaBitDepth = 8, ChromaFormat = 5
TVMR: cbBeginSequence: 2007: BeginSequence  ColorPrimaries = 2, TransferCharacteristics = 2, MatrixCoefficients = 2
Allocating new output: 1920x1088 (x 10), ThumbnailMode = 0
OPENMAX: HandleNewStreamFormat: 3464: Send OMX_EventPortSettingsChanged : nFrameWidth = 1920, nFrameHeight = 1088 
TVMR: FrameRate = 1000 
TVMR: NVDEC LowCorner Freq = (576000 * 1024) 
WARNING: from element /GstPipeline:pipeline0/GstNvOverlaySink-nvoverlaysink:nvoverlaysink-nvoverlaysink0: A lot of buffers are being dropped.
Additional debug info:
gstbasesink.c(2854): gst_base_sink_is_too_late (): /GstPipeline:pipeline0/GstNvOverlaySink-nvoverlaysink:nvoverlaysink-nvoverlaysink0:
There may be a timestamping problem, or this computer is too slow.
---> TVMR: Video-conferencing detected !!!!!!!!!
WARNING: from element /GstPipeline:pipeline0/GstNvOverlaySink-nvoverlaysink:nvoverlaysink-nvoverlaysink0: A lot of buffers are being dropped.
Additional debug info:
gstbasesink.c(2854): gst_base_sink_is_too_late (): /GstPipeline:pipeline0/GstNvOverlaySink-nvoverlaysink:nvoverlaysink-nvoverlaysink0:
There may be a timestamping problem, or this computer is too slow.
^Chandling interrupt.
Interrupt: Stopping pipeline ...
EOS on shutdown enabled -- Forcing EOS on the pipeline
Waiting for EOS...
TVMR: NvMMLiteTVMRDecDoWork: 6768: NVMMLITE_TVMR: EOS detected
TVMR: TVMRBufferProcessing: 5723: Processing of EOS 
TVMR: TVMRBufferProcessing: 5800: Processing of EOS Done
Got EOS from element "pipeline0".
EOS received - stopping pipeline...
Execution ended after 0:00:04.030194830
Setting pipeline to PAUSED ...
Setting pipeline to READY ...
TVMR: TVMRFrameStatusReporting: 6369: Closing TVMR Frame Status Thread -------------
TVMR: TVMRVPRFloorSizeSettingThread: 6179: Closing TVMRVPRFloorSizeSettingThread -------------
TVMR: TVMRFrameDelivery: 6219: Closing TVMR Frame Delivery Thread -------------
TVMR: NvMMLiteTVMRDecBlockClose: 8105: Done 
Setting pipeline to NULL ...
Freeing pipeline ...
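
One thing I may still try for the "A lot of buffers are being dropped" warnings is disabling clock sync at the sink; sync is a standard GstBaseSink property, and with sync=false late buffers are rendered as they arrive instead of being dropped:

gst-launch-1.0 -e nvcamerasrc fpsRange="30.0 30.0" ! 'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)I420, framerate=(fraction)30/1' ! omxh265enc ! 'video/x-h265, stream-format=(string)byte-stream' ! h265parse ! matroskamux ! decodebin ! nvoverlaysink sync=false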

What would be the right way to encode --> write to a file --> read back from the same file --> decode --> feed into the OpenCV pipeline, using the code you gave here:

https://devtalk.nvidia.com/default/topic/1011376/jetson-tx1/gstreamer-decode-live-video-stream-with-the-delay-difference-between-gst-launch-1-0-command-and-appsink-callback/post/5160929/#5160929

Thanks,

Hi dumbogeorge,
I am not sure it works to read and write the same file at the same time, so I suggest using hlssink. Other users may share their experience.
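
If you go with hlssink, one convenient property of MPEG-TS is that segments concatenate cleanly, so finished segments can be fed to a decoder while later ones are still being written. A rough sketch (not verified; the filenames follow hlssink's segment naming pattern, and the stream is H.264 because the encode pipeline above used omxh264enc):

cat /home/ubuntu/segments/segment00000.ts /home/ubuntu/segments/segment00001.ts | gst-launch-1.0 fdsrc ! tsdemux ! h264parse ! omxh264dec ! nvoverlaysink

fdsrc reads from stdin by default, so the decode side never opens a file that is still growing.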

If you don't need it to be mp4/mkv, you can consider using tegra_multimedia_api. Have one thread store the stream into multiple H.264 files, and have another thread read the files for analysis and then delete the processed files.
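
As a GStreamer analogue of that multiple-raw-files idea, multifilesink with next-file=key-frame starts a new file at each keyframe, so every chunk begins on an IDR frame and should be decodable on its own. A sketch (not verified; iframeinterval and insert-sps-pps are omxh264enc properties that keep the chunks short and self-describing):

gst-launch-1.0 -e nvcamerasrc fpsRange="30.0 30.0" ! 'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)I420, framerate=(fraction)30/1' ! omxh264enc iframeinterval=30 insert-sps-pps=1 ! 'video/x-h264, stream-format=(string)byte-stream' ! h264parse ! multifilesink location=/home/ubuntu/chunks/chunk%05d.h264 next-file=key-frame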

Hi DaneLLL

Just thinking out loud: how would I feed the multiple files generated by hlssink to the decoder element in the subsequent part of a GStreamer pipeline? Would decodebin not complain if data from file_00 is cut off midway and later continued from file_01?

I do need mp4/mkv and cannot delete the intermediate recorded files. But I would like to explore generating raw H.264/H.265 bitstreams (without an mp4/mkv container), so that I can try the tegra_multimedia_api examples. How would the encoder generate raw bitstreams? Any example GStreamer pipelines?

Thanks,

Hi dumbogeorge,
tegra_multimedia_api is not GStreamer-based. You can install the samples via JetPack and check:
tegra_multimedia_api/samples/00_video_decode/
tegra_multimedia_api/samples/01_video_encode/

It looks like hlssink is not a 100% fit for your case. Maybe you can get help from http://gstreamer-devel.966125.n4.nabble.com/
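
On the raw-bitstream question above: the usual approach is simply to drop the muxer and write the parsed byte-stream straight to disk. A sketch (not verified; the resulting elementary stream is the kind of input tegra_multimedia_api/samples/00_video_decode expects):

gst-launch-1.0 -e nvcamerasrc fpsRange="30.0 30.0" ! 'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)I420, framerate=(fraction)30/1' ! omxh265enc ! 'video/x-h265, stream-format=(string)byte-stream' ! h265parse ! filesink location=/home/ubuntu/junk.h265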