It looks like the H.264 encoding example (Multimedia UG p. 9) takes as much as 80% of the ARM cluster's CPU!
gst-launch-0.10 videotestsrc ! 'video/x-raw-yuv, width=(int)1280, height=(int)720, format=(fourcc)I420' ! nv_omx_h264enc ! qtmux ! filesink location=test.mp4 -v -e
The same is true for the VP8 encoding example. One would guess that HW-based video encoding doesn't work? Am I missing something here?
Have you tried limiting the FPS? The above pipeline might try to push as high an FPS as possible, and if that's the case, then the CPU load will of course be higher than in a normal 30-60 fps case.
I think you could try specifying the framerate in the caps before the h264 encoder and also setting "is-live" to true on videotestsrc. Or test with a real camera.
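For example, something like this (just a sketch of the suggestion above: the framerate cap and is-live=true are the additions, and the exact caps may need adjusting for your setup):

```shell
# Sketch: cap videotestsrc at 30 fps and mark it live, so the pipeline
# doesn't free-run as fast as the CPU allows.
gst-launch-0.10 videotestsrc is-live=true ! \
  'video/x-raw-yuv, width=(int)1280, height=(int)720, format=(fourcc)I420, framerate=(fraction)30/1' ! \
  nv_omx_h264enc ! qtmux ! filesink location=test.mp4 -e
```

With the source throttled to a fixed framerate, CPU usage should reflect the actual per-frame cost rather than how many frames the pipeline can push.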
I've tried this:
gst-launch-0.10 filesrc location=sintel_trailer-720p.mp4 ! qtdemux name=mux ! nv_omx_h264dec ! 'video/x-nv-yuv' ! nvvidconv ! nv_omx_vp8enc ! webmmux ! filesink location=sintel.webm -v -e
It takes 77 seconds on the TK1. The video length is 52 seconds. Two ARM cores are used for the job, utilized at 60-70%. This takes just too long and uses too much CPU to conclude that it was done in HW.
The decoder (nv_omx_h264dec) and the encoder (nv_omx_vp8enc) probably don't have SW implementations at all, so if you do get correct output, then it was done in HW. But if it takes too much CPU, then something else in the pipeline is consuming it.
I had performance problems with my USB camera to h264 conversion, and it got much better when I realized that I didn't need any extra conversion with nvvidconv.
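For reference, a minimal sketch of a camera pipeline without the extra nvvidconv step (the device path and caps here are placeholders; match them to what your camera actually outputs):

```shell
# Sketch: feed the v4l2 camera output straight into the HW encoder,
# skipping nvvidconv. Assumes the camera can deliver a format the
# encoder accepts (e.g. I420); otherwise a conversion is still needed.
gst-launch-0.10 v4l2src device=/dev/video0 ! \
  'video/x-raw-yuv, width=(int)1280, height=(int)720, framerate=(fraction)30/1' ! \
  nv_omx_h264enc ! qtmux ! filesink location=cam.mp4 -e
```

Whether this works depends on caps negotiation between your camera and the encoder; running with -v shows which caps each link actually settles on.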