Changing the properties of the omxh264enc GStreamer 1.0 element

Hi,
I’m changing the parameters of the omxh264enc element in my GStreamer 1.0 pipeline, but the received image quality stays the same (bad).

To build the pipeline, I link an appsrc element to videoconvert after calling gst_parse_launch with the following string:

"videoconvert name=ffmpeg ! capsfilter caps=video/x-raw,format=I420,width=640,height=480,framerate=30/1,pixel-aspect-ratio=1/1 ! omxh264enc control-rate=1 bitrate=800000000 quant-i-frames=100 quant-p-frames=100 quant-b-frames=100 iframeinterval=32 quality-level=1 low-latency=true no-B-Frames=true profile=1 ! h264parse ! rtph264pay config-interval=1 pt=96 ! udpsink host=x.x.x.x port=5200 sync=false"

If I replace the omxh264enc with x264enc, I get a comparatively good image quality:

x264enc bframes=0 b-adapt=false speed-preset=1 tune=0x00000004

Is anyone able to configure the omxh264enc?

The pipeline for stream reception:

gst-launch-1.0 udpsrc port=5200 ! "application/x-rtp, encoding-name=H264, payload=96" ! rtph264depay ! h264parse ! avdec_h264 ! xvimagesink sync=false

Thanks,

low-latency=1 and control-rate=1 (variable bitrate) don’t mix, last I looked into it. You want control-rate=2 (constant bitrate) if you need low-latency video. That means all your quant-* parameters are unnecessary, since they’ll be ignored; overall video quality is then controlled via the bitrate property. For live video situations you want a constant bitrate.

Your bitrate value of 800 Mbit is insane. Lower that to somewhere in the 1000000 (1 Mbit) to 40000000 (40 Mbit) range. At such a low resolution as 640x480, just set it to 3000000 (3 Mbit), or maybe even 2000000 (2 Mbit), since you only want 30 frames/sec.

I would add async=false to both the udpsink and xvimagesink elements, as that gives some noticeable latency improvements.

Thanks TRON. I modified the pipeline following your advice. Here is what I use to transmit the live video:

std::string playPipelineStr = "appsrc name=videoSrc stream-type=0 is-live=true caps=video/x-raw,format=BGR,width=640,height=480,framerate=30/1 ! videoconvert name=ffmpeg ! capsfilter caps=video/x-raw,format=I420,width=640,height=480,framerate=30/1 ! omxh264enc bitrate=3000000 control-rate=2 ! h264parse ! rtph264pay config-interval=1 pt=96 ! udpsink host=x.x.x.x port=5200 sync=false async=false";

I noticed that when the camera is static, the image quality is good, but as soon as I move the camera, the image loses a lot of detail and becomes pixelated (small blocks of pixels). Also, if I display the image locally, it’s of good quality. And again, it works well with the x264enc element, but I don’t want to use it since I need the CPU for other tasks.

Could there be other parameters that cause such a result?

Hello, fcan:
You can check the properties of omxh264enc with:

gst-inspect-1.0 omxh264enc

Some properties related to encoding quality (I have only listed some simple props; other props require deeper knowledge of video encoding):

  1. ‘control-rate’ and ‘bitrate’
    By default, ‘control-rate’ is ‘variable’. You can change it to ‘constant’ and increase the ‘bitrate’ prop to get better quality.

  2. ‘quality-level’
    By default, it is set to ‘0’, which denotes the lowest quality. You can change this prop to ‘2’ to get better quality.

BTW: compared with a static scene, a quickly changing scene will definitely result in lower video quality. That is not only an encoder-tuning issue; it may be related to the sensor as well.

br
Chenjian

Hello jachen,
I changed the pipeline following your indications, sending the string:

std::string playPipelineStr = "appsrc name=videoSrc stream-type=0 is-live=true caps=video/x-raw,format=BGR,width=640,height=480,framerate=30/1 ! videoconvert name=ffmpeg ! capsfilter caps=video/x-raw,format=I420,width=640,height=480,framerate=30/1 ! omxh264enc bitrate=4000000 control-rate=2 quality-level=2 ! capsfilter caps='video/x-h264,stream-format=(string)byte-stream' ! h264parse ! rtph264pay config-interval=1 pt=96 ! udpsink host=10.7.163.128 port=5200 sync=false async=false";

to gst_parse_launch(). But it did not make any difference. The image quality while moving is abnormally low compared to the image obtained with the software x264enc encoder.

Could it be related to the way I push the images into the appsrc element, to some properties of appsrc, or to some timing issue?

To configure appsrc, I use:

mDataSize = 3 * width * height * sizeof(char); // BGR image format
guint64 maxSize = 3 * mDataSize;
g_object_set(G_OBJECT(mpVideoSource), "blocksize", mDataSize, NULL);
g_object_set(G_OBJECT(mpVideoSource), "max-bytes", maxSize, NULL);

Images are sent using:

void RTPTransmitter::sendToNetwork(unsigned char* IMG_data)
{
// Push the image in appsrc
GstFlowReturn ret;
GstMapInfo info;

guint buffersize = static_cast<guint>(mDataSize);

GstBuffer *buffer = gst_buffer_new_and_alloc(buffersize);

if(!buffer)
    throw std::runtime_error("Can't allocate buffer in sendToNetwork");

if (gst_buffer_map (buffer, &info, (GstMapFlags)GST_MAP_WRITE))
{
    memcpy(info.data, IMG_data, buffersize);
    gst_buffer_unmap (buffer, &info);
}
else
{
    gst_buffer_unref(buffer);
    throw std::runtime_error("Can't map buffer in sendToNetwork");
}

// mpVideoSource is the appsrc element
g_signal_emit_by_name (mpVideoSource, "push-buffer", buffer, &ret);

if (ret != GST_FLOW_OK)
{
    gst_buffer_unref(buffer);
    throw std::runtime_error("Can't push buffer in sendToNetwork");
}

gst_buffer_unref(buffer);

}

Alright. Using GStreamerBaseRenderImpl.cpp from the VisionWorks examples, I found the problem. It has to do with the timestamping of the buffers pushed into appsrc. Here is the corrected function to push buffers into appsrc, for a camera at 30 FPS:

void RTPTransmitter::sendToNetwork(unsigned char* IMG_data)
{
// Push the image in appsrc
GstFlowReturn ret;
GstMapInfo info;

guint buffersize = static_cast<guint>(mDataSize);

GstClockTime timestamp = mNbFrames * 33000000; // ~33 ms per frame, in ns

GstBuffer * buffer = gst_buffer_new_allocate(NULL, buffersize, NULL);

if(!buffer)
    throw std::runtime_error("Can't allocate buffer in sendToNetwork");

if (gst_buffer_map (buffer, &info, (GstMapFlags)GST_MAP_WRITE))
{
    memcpy(info.data, IMG_data, buffersize);
    gst_buffer_unmap (buffer, &info);
}
else
{
    gst_buffer_unref(buffer);
    throw std::runtime_error("Can't map buffer in sendToNetwork");
}

GST_BUFFER_PTS(buffer) = timestamp;
if (!GST_BUFFER_PTS_IS_VALID(buffer))
    printf("Failed to setup PTS\n");

GST_BUFFER_DTS(buffer) = timestamp;
if (!GST_BUFFER_DTS_IS_VALID(buffer))
    printf("Failed to setup DTS\n");

GST_BUFFER_DURATION(buffer) = 33000000;
if (!GST_BUFFER_DURATION_IS_VALID(buffer))
    printf("Failed to setup duration\n");

GST_BUFFER_OFFSET(buffer) = mNbFrames;
if (!GST_BUFFER_OFFSET_IS_VALID(buffer))
    printf("Failed to setup offset\n");

g_signal_emit_by_name (mpVideoSource, "push-buffer", buffer, &ret);

if (ret != GST_FLOW_OK)
{
    gst_buffer_unref(buffer);
    throw std::runtime_error("Can't push buffer in sendToNetwork");
}

gst_buffer_unref(buffer);

mNbFrames++;

}

Thanks guys for your help.

Sorry for bringing this up again… but I have a similar problem which is likely also caused by some misconfiguration of the omxh264enc:

I am trying to encode a very low data-rate video stream from a thermal camera (160x120 px @ 9 fps).
I am also using the gstreamer-1.0 omxh264enc element for this (because I cannot use the CPU for load reasons), and my encoder and decoder pipelines in principle work…

…BUT,

I have some serious delay when starting video playback. When I start the receiving-side gstreamer pipeline, it takes up to 20 seconds before the video shows up. Why is that?

My TX pipeline is:
gst-launch-1.0 videotestsrc ! video/x-raw,width=160,height=120,framerate=30/1 ! omxh264enc control-rate=1 target-bitrate=600000 ! h264parse config-interval=3 ! rtph264pay ! udpsink host=127.0.0.1 port=5600

My RX pipeline is:
gst-launch-1.0 -e udpsrc port=5600 ! "application/x-rtp,media=video" ! rtph264depay ! h264parse ! queue ! avdec_h264 ! autovideosink sync=false

Thank you very much for your help & suggestions!
Thomas