I am using a TX2i board with an Elroy carrier to capture a raw video stream from a V4L2 camera, compress it with H.265, and send the encoded video to another device over a serial (UART) link. On the receiving device, I then decode the video and split it into JPEG images. I am facing two challenges:
1- The MP4 data I send from the TX2i is not byte-identical to the data I receive on the other end. The files are roughly 99.9% identical, with only a few hundred bytes missing across a large file (compared in a hex editor), but those missing bytes make the received video unplayable; the original file plays fine. I want either the encoding or the decoding pipeline to be robust against such minor corruption (missing bytes). What modifications should I make to the pipeline(s) to achieve this?
2- The receiver will need to start capturing at an arbitrary moment, so the header data at the start of the original file will be missing and the captured stream will begin mid-file. I think I need a configuration that makes the pipeline send the header metadata periodically rather than only once. What modifications should I make to the pipeline(s) to achieve this?
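To make concrete what I mean by sending header metadata regularly: I assume something along these lines is the direction, since h265parse has a config-interval property that can re-insert VPS/SPS/PPS parameter sets with every IDR frame, and a self-synchronizing container such as MPEG-TS could stand in for MP4. The element choice here is only my sketch, not something I have verified:

```shell
# Sketch only: repeat parameter sets at every IDR (config-interval=-1)
# and mux into MPEG-TS, which, unlike MP4, is designed so a receiver
# can join the stream mid-transmission.
... ! omxh265enc ... ! h265parse config-interval=-1 ! mpegtsmux ! ...
```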
The current pipeline that captures the video data, encodes it, and hands it to the application looks like this (snippet of the C code):
…
// Create the elements.
data->v4l2_source = gst_element_factory_make ("v4l2src", "v4l2_source");
data->video_rate = gst_element_factory_make ("videorate", "video_rate");
data->omx_h265_enc = gst_element_factory_make ("omxh265enc", "omx_h265_enc");
data->h265_parse = gst_element_factory_make ("h265parse", "h265_parse");
data->mp4_mux = gst_element_factory_make ("mp4mux", "mp4_mux");
data->encode_sink = gst_element_factory_make ("appsink", "encode_sink");
…
// Configure the elements.
g_object_set (data->encode_sink, "emit-signals", TRUE, NULL);
g_signal_connect (data->encode_sink, "new-sample", G_CALLBACK (new_sample_encoded), data);
g_object_set (data->video_rate, "drop-only", TRUE, "max-rate", FPS, NULL);
g_object_set (data->omx_h265_enc, "bitrate", 200000, "peak-bitrate", 1000000, "preset-level", 3, NULL);
g_object_set (data->mp4_mux, "fragment-duration", 1000, NULL);
…
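For reference, I believe the C snippet above corresponds roughly to the following gst-launch-1.0 line (with filesink standing in for the appsink so it can be run standalone, and assuming FPS is 30; the output path is just an example):

```shell
gst-launch-1.0 v4l2src \
  ! videorate drop-only=true max-rate=30 \
  ! omxh265enc bitrate=200000 peak-bitrate=1000000 preset-level=3 \
  ! h265parse \
  ! mp4mux fragment-duration=1000 \
  ! filesink location=/home/user/Desktop/test.mp4
```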
The current pipeline that decodes the video data and splits it into its JPEG frame components is:
gst-launch-1.0 filesrc location=/home/user/Desktop/test.mp4 ! qtdemux ! h265parse ! omxh265dec ! videoconvert ! jpegenc ! multifilesink location=/home/user/Desktop/test/%05d.jpg
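For issue 1, a decode-side change would also be acceptable to me. For example, if the sender were changed to emit a raw H.265 byte-stream instead of MP4, I assume the receiver could resynchronize on NAL start codes after dropped bytes with something like this (the .h265 filename is hypothetical; qtdemux is dropped because there would no longer be an MP4 container):

```shell
gst-launch-1.0 filesrc location=/home/user/Desktop/test.h265 \
  ! h265parse ! omxh265dec ! videoconvert ! jpegenc \
  ! multifilesink location=/home/user/Desktop/test/%05d.jpg
```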