Encoder on Jetson TK1?

Hi,

We are using GStreamer for encoding but struggling to get acceptable performance: at full HD, encoding takes more than 100 ms per frame. Has anyone tried encoding on it?

Any idea what we are missing, or how we could do things differently?
Thanks in advance.

There are examples in the documentation; have you tried those? What is the source of the video data? Are you running a longer test and calculating the average ms per frame? In some cases it might be worth testing whether bumping up the clocks manually helps.
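On the TK1, bumping the clocks usually means at least forcing the CPU governor to performance. Something along these lines should do it (just a sketch, run as root; the paths assume the standard cpuquiet/cpufreq sysfs interfaces of the L4T kernel):

  # keep all cores online instead of letting cpuquiet hotplug them
  echo 0 > /sys/devices/system/cpu/cpuquiet/tegra_cpuquiet/enable
  # pin the CPU frequency governor to the maximum
  echo performance > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor

The GPU and memory clocks can be raised through debugfs as well, but the exact paths vary between L4T releases.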

gst-launch-0.10 is a good tool for testing different pipelines. Could you post here one of the pipelines that you would like to use but that doesn’t provide the expected performance? A simplified pipeline would be even better, if it still does the essential thing you are after while showing the low performance.
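For example, something like this would exercise just the encoder (a sketch: videotestsrc stands in for your real source and fakesink discards the output; note that videotestsrc itself costs some CPU, so this gives an upper bound on the per-frame time):

  time gst-launch-0.10 videotestsrc num-buffers=300 ! 'video/x-raw-yuv, format=(fourcc)I420, width=1920, height=1080, framerate=30/1' ! nv_omx_h264enc ! fakesink

Dividing the wall-clock time by the 300 frames gives the average cost per encoded frame.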

Thanks kulve.

Our app: we receive RAW data from a USB 2.0 camera, do some processing on it, encode it as H.264, and stream it out.

The pipeline we are using is: fakesrc ! nv_omx_h264enc ! appsink, with a capsfilter between fakesrc and nv_omx_h264enc to specify the input data configuration.
The capsfilter configuration is: format = I420 (YUV 4:2:0), resolution = 1920 x 1080, framerate = 30 fps.
A YUV 4:2:0 buffer is supplied whenever the handoff callback (cb_handoff) registered on fakesrc fires. The buffer is encoded as H.264, and the encoded data arrives in the callback registered on appsink.
The average time taken by the whole pipeline (from fakesrc to appsink) is more than 100 ms.

At this point, we are not sure whether the problem is with the encoder or with the pipeline (it mostly seems to be the pipeline).

Hi sudhirbh,
A couple of things. First, I’m sure you know this already, but you won’t be able to get 1920x1080 30 fps raw YUV video out of a USB 2.0 camera, as that’s more bandwidth than USB 2.0 provides: 1080p I420 is 1.5 bytes per pixel, so 1920 x 1080 x 1.5 x 30 ≈ 93 MB/s, while USB 2.0 tops out at 480 Mbit/s (60 MB/s) even in theory. It depends on the camera, but some provide already-encoded video (MJPEG or H.264) at high frame rates and frame sizes.

Second, it would be helpful if you provided the gst-launch command line (or pipeline string) you are using, because it’s difficult to tell from your description whether you are using any GStreamer queue or tee elements. In particular, the queue element provides threading that can speed up a pipeline greatly.
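For instance, based on the pipeline you described above, queues could be added like this (just a sketch; whether it helps depends on where the time is actually being spent):

  fakesrc ! capsfilter ! queue ! nv_omx_h264enc ! queue ! appsink

Each queue creates a thread boundary, so the fakesrc handoff, the encoder, and the appsink callback can run in parallel instead of one after another.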

Third, I am curious what mechanism you’re using to measure the time. Are you measuring from the moment the callbacks are called? If so, is it an average over a period of time or over only a few calls?

Also, there is a difference between latency and FPS. Even if it took 10 seconds for one frame to travel from fakesrc through the encoder to the appsink, the stream could still run at a steady 30 fps, as long as enough frames are in flight in the pipeline at once.

So, are you interested in the latency or in the FPS?

Thanks kulve & kangalow.

We are not using it from the command line; we have implemented C code, and there are two different pipelines:

  1. v4l2src ! capsfilter (video/x-raw-yuv, width=1920, height=1080, framerate=30/1) ! appsink
    → Some processing is done on the captured RAW data, which is stored in a list and then fed into the second pipeline.
  2. fakesrc ! capsfilter (video/x-raw-yuv, width=1920, height=1080, format=(fourcc)I420, framerate=30/1) ! nv_omx_h264enc ! appsink
    → We fetch the encoded data from the buffers delivered by appsink. We measure the time from fakesrc to appsink and back to fakesrc.

With the first pipeline, we are getting around 24 frames per second (raw data).
After feeding those frames into the second pipeline, only 12 of the 24 frames per second come out encoded.
So we are losing 12 frames per second in the encoding stage. The question is: can we speed it up?

Thanks!