I want to use tegra_multimedia_api to do H264 hardware encode/decode. Our target is: decode 4x 4K RTSP streams -> CV computing -> encode 4x 4K streams and output them over RTSP.
To reach this goal we want to reserve the GPU for the CV computing, so we want to leverage the hardware codec for decoding and encoding.
But while testing the samples on TX2, we found some spots in the result and don't know how to remove them. My questions are:
1. How do we use tegra_multimedia_api to enable hardware H264 encode/decode capability with correct output?
2. From the docs we know NVIDIA has gst-omx and tegra_multimedia_api. What is the difference, and which one's performance is better?
3. Could we achieve our goal on TX2?
To double confirm: per the spec, I can get H264 decode at 3840x2160@60fps plus H264 encode at 3840x2160@30fps at the same time.
Is that correct?
2, "how to use tegra_multimedia_api to eanble hardware encode/decode capability on h264 with correct result output?"
3. omx and tegra_multimedia_api, what's different? which one's performance is better?
After flashing through SDK Manager, you will see the samples in ~/tegra_multimedia_api/samples/
You may start with 00_video_decode and 01_video_encode.
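As a hedged pointer (paths taken from the commands later in this thread): each sample directory contains a Makefile, so a minimal build-and-run flow looks like:

cd ~/tegra_multimedia_api/samples/00_video_decode
make
./video_decode H264 <input.h264>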
We support both tegra_multimedia_api and gstreamer. They are two different software frameworks, and users can pick either one for application development. gstreamer is easy to use because there are existing plugins for various use cases; for example, if you need RTSP streaming, you can use rtspsrc and rtph264depay. tegra_multimedia_api is low level and may have slightly better performance, but you need to implement/integrate all functions yourself.
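As an illustration (a sketch, not a tested command for your setup): a hardware-decoded RTSP pipeline on TX2-era L4T could look roughly like the line below. The rtsp:// URL is a placeholder, and omxh264dec / nvoverlaysink are the NVIDIA OMX decoder and display sink shipped with L4T; your element choice may differ.

gst-launch-1.0 rtspsrc location=rtsp://<camera-ip>/stream ! rtph264depay ! h264parse ! omxh264dec ! nvoverlaysink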
We tested the tegra_multimedia_api encode and decode functions and found some spots in the preview, and we don't know how to fix them.
Here are our test steps:
1. Encode a YUV file to an H264 file:
nvidia@tegra-ubuntu:~/tegra_multimedia_api/samples/01_video_encode$ ./video_encode ~/tegra_multimedia_api/samples/01_video_encode/output_640_360_debug.yuv 640 360 H264 u1_yuv.h264 -hpt 1
2. Decode the H264 file; unexpected spots appear in the output:
nvidia@tegra-ubuntu:~/tegra_multimedia_api/samples/00_video_decode$ ./video_decode H264 u1_yuv.h264
Are the encode/decode commands we used correct?
If not, how should they be used?
I have a question about how to integrate the decode API into our app. In the MMAPI sample video_decode_main.cpp,
all the data is read from a file and then decoded at once. But I receive the H264 stream from our system, and I only want to decode it frame by frame, not read all the data and decode it at the same time. Do you have any good example for this case?
In the 00_video_decode example, the dec_capture_loop_fcn thread waits on the result of the file read to update the resolution in the decoder. My input is not a file: I get the H264 buffers from our machine, so I replaced the buffer read from the file with my H264 frame buffer, but it seems that, apart from resolution, crop, and ...., many other parameters depend on reading the file.
Do you have any idea how to fill in these parameters without opening a file? The relevant part of the sample:
cout << "Starting decoder capture loop thread" << endl;
// Need to wait for the first Resolution change event, so that
// the decoder knows the stream resolution and can allocate appropriate
// buffers when we call REQBUFS
ret = dec->dqEvent(ev, 50000);
if (ret < 0)
if (errno == EAGAIN)
"Timed out waiting for first V4L2_EVENT_RESOLUTION_CHANGE"
cerr << "Error in dequeueing decoder event" << endl;
while ((ev.type != V4L2_EVENT_RESOLUTION_CHANGE) && !ctx->got_error);
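Not an authoritative answer, but a sketch of one direction: the decoder does not actually need a file, it only needs complete encoded frames (NAL units) queued on its output plane; the resolution, crop, and buffer parameters are then reported by the decoder itself through the V4L2_EVENT_RESOLUTION_CHANGE path above. Below is a minimal, hypothetical feed function: feed_frame, frame_data, and frame_size are placeholder names for your streaming source, while the output_plane calls are the same ones 00_video_decode uses after its initial getNthBuffer() round.

#include <cstdint>
#include <cstring>
#include "NvVideoDecoder.h"

// Hypothetical helper: push one complete encoded H264 frame from the
// stream into the decoder's output plane, replacing the file read in
// the sample. Assumes the output plane was set up as in 00_video_decode
// (setOutputPlaneFormat, setupPlane, setStreamStatus) and that the first
// getNumBuffers() buffers were already queued via getNthBuffer().
static int feed_frame(NvVideoDecoder *dec, const uint8_t *frame_data,
                      size_t frame_size)
{
    struct v4l2_buffer v4l2_buf;
    struct v4l2_plane planes[MAX_PLANES];
    NvBuffer *buffer;

    memset(&v4l2_buf, 0, sizeof(v4l2_buf));
    memset(planes, 0, sizeof(planes));
    v4l2_buf.m.planes = planes;

    // Block until the decoder hands back an empty output-plane buffer.
    if (dec->output_plane.dqBuffer(v4l2_buf, &buffer, NULL, -1) < 0)
        return -1;

    // Copy the encoded frame in, instead of reading it from a file.
    memcpy(buffer->planes[0].data, frame_data, frame_size);
    buffer->planes[0].bytesused = (uint32_t) frame_size;
    v4l2_buf.m.planes[0].bytesused = buffer->planes[0].bytesused;

    // Queue it for decoding. Queuing a buffer with bytesused == 0
    // signals end of stream, as in the sample.
    return dec->output_plane.qBuffer(v4l2_buf, NULL);
}

With something like this in place, the rest of the capture loop (the dqEvent wait above, REQBUFS, and the capture-plane dq/q) can stay as in the sample, since all of those values come from the decoder rather than from the file.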