Problem using OpenMAX on Tegra K1 (driver R19.3)

I cannot enable the ports of the H.264 encoder component after setting the component's state to idle.

Steps:
OMX_Init();
OMX_GetHandle(…);
// disable all ports

// set parameters

// switch the component state to idle

OMX_SendCommand( …, OMX_CommandPortEnable, 0, NULL);
// check the state of port 0, which is always disabled
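
In full, the sequence looks roughly like this (just a sketch of what I am doing; error handling, the callback setup and the actual parameter values are omitted):

OMX_ERRORTYPE err = OMX_Init();

OMX_HANDLETYPE handle;
err = OMX_GetHandle(&handle, …, NULL, &callbacks);   // component name elided

// disable all ports before configuring them
OMX_SendCommand(handle, OMX_CommandPortDisable, OMX_ALL, NULL);

// ... OMX_SetParameter() calls for the port definitions ...

// request the transition to idle
OMX_SendCommand(handle, OMX_CommandStateSet, OMX_StateIdle, NULL);

// try to re-enable port 0
OMX_SendCommand(handle, OMX_CommandPortEnable, 0, NULL);

// query the port again -- bEnabled still reports OMX_FALSE
OMX_PARAM_PORTDEFINITIONTYPE def = { 0 };
def.nSize      = sizeof(def);
def.nPortIndex = 0;            // nVersion filled in as usual
OMX_GetParameter(handle, OMX_IndexParamPortDefinition, &def);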

Does anyone know why?
Thanks

There is currently rather limited information about OMX on the Jetson TK1.

It would be helpful to have more detail; I have a little experience with OMX on the RPi.

Maybe also add a tutorial here:
http://elinux.org/Jetson_TK1#Tutorials_for_developing_with_Jetson_TK1

I added links on OpenGL there…

Thank you very much!
I wrote my test code following [url]http://solitudo.net/software/raspberrypi/rpi-openmax-demos/rpi-encode-yuv.c/[/url]. However, it always gets stuck at 'block_until_port_changed(ctx.encoder, 0, OMX_TRUE);
'. In my opinion, the only possible cause of this problem is not using ‘bcm_host_init();’. I cannot find this function or something similar in tegra k1. So this function is not used in my test code.

bcm_* functions are Broadcom-specific and have nothing to do with OMX.

The OMX API on Tegra is badly documented because the GStreamer API is the officially supported multimedia API on Tegra. I recommend using that.

Thanks!
I know little about GStreamer. The filesrc element seems to send a frame only after reading it from the local file. That is not what I want. I need a file source that has all frames loaded into memory at the beginning and then keeps sending frames once started. So I would have to create and register a new element. T-T Am I right? Thanks

I’m not sure I understand your goal. You want something that reads a whole file into memory and then starts sending frames one at a time?

Anyway, with GStreamer there are a lot of different sources and sinks, plus other plugins that do everything in between. E.g. you could try using filesrc to read the data and pipe it to a queue element. You can then tune the queue to your liking.

One way is to use the GStreamer appsrc element and inject the data from your own application. Then you can implement your own buffering scheme if necessary.
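
The core of that approach is small with the 0.10 API; roughly something like this (a sketch only, assuming you already have the appsrc element from the pipeline, its caps set, and a raw frame in memory):

// wrap one raw frame in a GstBuffer and hand it to appsrc
GstBuffer* buf = gst_buffer_new_and_alloc(frameSize);
memcpy(GST_BUFFER_DATA(buf), frameData, frameSize);

GstFlowReturn ret;
g_signal_emit_by_name(appsrc, "push-buffer", buf, &ret);   // the signal takes its own reference
gst_buffer_unref(buf);

If you link against gstreamer-app you can call gst_app_src_push_buffer() directly instead; note that it takes ownership of the buffer, so you don't unref it yourself in that case.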

There are some GStreamer examples here [url]http://elinux.org/Jetson/H264_Codec[/url] and also in the L4T_Jetson_TK1_Multimedia_User_Guide.pdf included with the Jetson docs. Like kulve, I do not recommend using OMX directly.

Thank you!
I want to test the maximum framerate of hardware H.264 encoding on the Jetson TK1. So, if using GStreamer, I need an element that can send out I420 frames continuously with essentially no overhead. As far as I know, an element called 'nvvidconv' must be inserted in front of nv_omx_h264enc. It converts 'video/x-raw-yuv' to 'video/x-nv-yuv' and usually takes about 30% of the CPU at 1080p. This element takes 12.6 seconds to convert 300 video frames at 1920x1088.

I think you can use "videotestsrc" as the source element for the tests. Set "is-live=true" if you don't want it to run at maximum speed.

At least with my webcam I don’t need to use nvvidconv anymore (I had to on Tegra3, afaik). Removing that extra conversion improved the results significantly.

You can also use "fakesink" to discard the frames, as outputting e.g. to a file on disk is slow (even though the disk cache helps a lot).

Something like this:

gst-launch-0.10 -v videotestsrc ! capsfilter caps='video/x-raw-yuv, format=(fourcc)I420, width=(int)640, height=(int)480' ! nv_omx_h264enc ! fakesink

But I don’t know how to get the FPS count from that…

Do also note that the encoder has multiple parameters that probably affect the time taken for the encoding.
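
One rough way to get a number is to cap the source with num-buffers and time the whole pipeline, something like:

time gst-launch-0.10 videotestsrc num-buffers=300 ! 'video/x-raw-yuv, format=(fourcc)I420, width=(int)1920, height=(int)1080' ! nv_omx_h264enc ! fakesink

and then divide the frame count by the elapsed time. It's only approximate, since it also includes the pipeline startup time.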

A capture device has its own maximum framerate, e.g. 60 fps, while the encoder may be able to encode 200 frames per second. So I'm using filesrc instead.

I’m sorry! You are right! I have tried videotestsrc and it worked. However, it takes 22.67 seconds to encode 300 frames of video at 1920x1080.
Thanks a lot!

My command line is:
gst-launch-0.10 videotestsrc num-buffers=300 ! 'video/x-raw-yuv, width=(int)1920, height=(int)1080, framerate=(fraction)60/1' ! nv_omx_h264enc ! fakesink

I’m not sure if num-buffers actually matches complete frames.

Maybe you should take any video (so that you know the frame count) and then decode it and dump the raw YUV frames to a file. Then you probably can read the raw YUV from the file with filesrc and measure how long it takes to encode it to a fakesink.

You can put the file on a tmpfs so that it's read from RAM instead of from the slow real disk/flash, although I'm not sure if the flash read performance really affects anything in that test.
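
Something like this should work for the measurement itself (a sketch; filesrc needs blocksize set to one frame, which for I420 at 1920x1080 is 1920*1080*1.5 = 3110400 bytes, so that each pushed buffer is a complete frame):

time gst-launch-0.10 filesrc location=/tmp/frames.yuv blocksize=3110400 ! 'video/x-raw-yuv, format=(fourcc)I420, width=(int)1920, height=(int)1080, framerate=(fraction)30/1' ! nv_omx_h264enc ! fakesink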

Re: using this with CUDA or OpenGL, for instance, the wiki suggests:

"Using the gstreamer-defined appsrc and appsink elements, it’s possible to efficiently send application data into a local gstreamer pipeline running in the application’s userspace.

For example, using appsrc, a CUDA video processing application could send its image buffers into gstreamer to be encoded, and then retrieve the H.264-encoded data from gstreamer using appsink.

Code sample using nv_omx_h264enc/nv_omx_h264dec coming soon"

and that was some time ago…

appsink/appsrc work with any pipeline, so for documentation check e.g.:

http://www.freedesktop.org/software/gstreamer-sdk/data/docs/2012.5/gst-plugins-base-libs-0.10/gst-plugins-base-libs-appsink.html

http://www.freedesktop.org/software/gstreamer-sdk/data/docs/latest/gst-plugins-base-libs-0.10/gst-plugins-base-libs-appsrc.html

http://docs.gstreamer.com/display/GstSDK/Basic+tutorial+8%3A+Short-cutting+the+pipeline

Admittedly, it took me some time to learn libgstreamer and get it running.

/*
 * gstEncoder
 */

#include "gstEncoder.h"

#include "sysTime.h"
#include "sysXML.h"

#include <gst/gst.h>
#include <gst/app/gstappsrc.h>


// gst_message_print
static gboolean gst_message_print(GstBus* bus, GstMessage* message, gpointer user_data)
{
	printf(LOG_HYDRA "gstreamer pipeline msg %s\n", gst_message_type_get_name(GST_MESSAGE_TYPE(message)));
 
	switch (GST_MESSAGE_TYPE (message)) 
	{
		case GST_MESSAGE_ERROR: 
		{
			GError *err = NULL;
			gchar *dbg_info = NULL;
 
			gst_message_parse_error (message, &err, &dbg_info);
			printf(LOG_HYDRA "gstreamer ERROR from element %s: %s\n", GST_OBJECT_NAME (message->src), err->message);
        		printf(LOG_HYDRA "gstreamer Debugging info: %s\n", (dbg_info) ? dbg_info : "none");
        
			g_error_free(err);
        		g_free(dbg_info);
			//g_main_loop_quit (app->loop);
        		break;
		}
		case GST_MESSAGE_EOS:
		{
			printf(LOG_HYDRA "gstreamer recieved EOS signal...\n");
			//g_main_loop_quit (app->loop);		// TODO trigger plugin Close() upon error
			break;
		}
		default:
			break;
	}

	return TRUE;
}


// constructor
gstEncoder::gstEncoder()
{	
	mAppSrc     = NULL;
	mBus        = NULL;
	mBufferCaps = NULL;
	mPipeline   = NULL;
	mNeedData   = false;

	AutoThread();		// TODO see if encoder can run in primary rendering thread, saving CPU
}


// destructor	
gstEncoder::~gstEncoder()
{
	
}


// onNeed
void gstEncoder::onNeed(GstElement * pipeline, guint size, gpointer user_data)
{
	//printf(LOG_HYDRA "gstreamer appsrc requesting data (%u bytes)\n", size);
	
	if( !user_data )
		return;

	gstEncoder* enc = (gstEncoder*)user_data;
	enc->mNeedData  = true;
}
 

// onEnough
void gstEncoder::onEnough(GstElement * pipeline, gpointer user_data)
{
	printf(LOG_HYDRA "gstreamer appsrc signalling enough data\n");

	if( !user_data )
		return;

	gstEncoder* enc = (gstEncoder*)user_data;
	enc->mNeedData  = false;
}


// ProcessBuffer
bool gstEncoder::ProcessBuffer( Buffer* buffer )
{
	if( !buffer )
		return false;

	if( !mNeedData )
	{
		buffer->Release();
		return true;
	}

	// convert hydra buffer to GstBuffer
	GstBuffer* gstBuffer = gst_buffer_new();

	const size_t size = buffer->GetSize();
	
	GST_BUFFER_MALLOCDATA(gstBuffer) = (guint8*)g_malloc(size);
	GST_BUFFER_DATA(gstBuffer) = GST_BUFFER_MALLOCDATA(gstBuffer);
	GST_BUFFER_SIZE(gstBuffer) = size;

	//static size_t num_frame = 0;
	//GST_BUFFER_TIMESTAMP(gstBuffer) = (GstClockTime)((num_frame / 30.0) * 1e9);
	//num_frame++;

	if( mBufferCaps != NULL )
		gst_buffer_set_caps(gstBuffer, mBufferCaps);

	memcpy(GST_BUFFER_DATA(gstBuffer), buffer->GetCPU(), size);

	buffer->Release();

	// queue buffer to gstreamer
	//GstFlowReturn ret = gst_app_src_push_buffer(GST_APP_SRC(mAppSrc), gstBuffer);
	GstFlowReturn ret;	
	g_signal_emit_by_name(mAppSrc, "push-buffer", gstBuffer, &ret);
	gst_buffer_unref(gstBuffer);

	if( ret != GST_FLOW_OK )
		printf(LOG_HYDRA "gstreamer -- AppSrc push-buffer failed (result %u)\n", ret);

	return true;
}


// ProcessEmit
void gstEncoder::ProcessEmit()
{
	while(true)
	{
		GstMessage* msg = gst_bus_pop(mBus);

		if( !msg )
			break;

		gst_message_print(mBus, msg, this);
		gst_message_unref(msg);
	}
}

//#define CAPS_STR "video/x-raw-rgb,width=640,height=480,bpp=24,depth=24"
//#define CAPS_STR "video/x-raw-yuv,width=640,height=480,format=(fourcc)I420"
  #define CAPS_STR "video/x-raw-yuv,width=1280,height=1024,format=(fourcc)I420,framerate=30/1"
//#define CAPS_STR "video/x-raw-gray,width=640,height=480,bpp=8,depth=8,framerate=30/1"

#define GST_LAUNCH_FROM_STRING

// Open
bool gstEncoder::Open()
{
	printf(LOG_HYDRA "gstEncoder::Open()\n");

	// parse pipeline
	const char* launchStr = "appsrc name=mysource ! " CAPS_STR " ! "
					    //"nvvidconv ! nv_omx_h264enc quality-level=2 ! "
					    "nv_omx_h264enc ! "
					    "video/x-h264 ! matroskamux ! queue ! "
				         "filesink location=/media/ubuntu/SDU11/test.mkv";

	GError* err = NULL;

	mPipeline = gst_parse_launch(launchStr, &err);

	if( err != NULL )
	{
		printf(LOG_HYDRA "gstreamer failed to create pipeline\n");
		printf(LOG_HYDRA "   (%s)\n", err->message);
		g_error_free(err);
		return false;
	}

	GstPipeline* pipeline = GST_PIPELINE(mPipeline);

	if( !pipeline )
	{
		printf(LOG_HYDRA "gstreamer failed to cast GstElement into GstPipeline\n");
		return false;
	}	

	// retrieve pipeline bus
	/*GstBus**/ mBus = gst_pipeline_get_bus(pipeline);

	if( !mBus )
	{
		printf(LOG_HYDRA "gstreamer failed to retrieve GstBus from pipeline\n");
		return false;
	}

	// add watch for messages (disabled when we poll the bus ourselves, instead of gmainloop)
	//gst_bus_add_watch(mBus, (GstBusFunc)gst_message_print, NULL);

	// get the appsrc
	GstElement* appsrcElement = gst_bin_get_by_name(GST_BIN(pipeline), "mysource");
	GstAppSrc* appsrc = GST_APP_SRC(appsrcElement);

	if( !appsrcElement || !appsrc )
	{
		printf(LOG_HYDRA "gstreamer failed to retrieve AppSrc element from pipeline\n");
		return false;
	}
	
	mAppSrc = appsrcElement;

	g_signal_connect(appsrcElement, "need-data", G_CALLBACK(onNeed), this);
	g_signal_connect(appsrcElement, "enough-data", G_CALLBACK(onEnough), this);

	/*GstCaps* caps = gst_caps_new_simple("video/x-raw-rgb",
								 "bpp",G_TYPE_INT,24,
								 "depth",G_TYPE_INT,24,
								 "width", G_TYPE_INT, 640,
								 "height", G_TYPE_INT, 480,
								 NULL);*/
	mBufferCaps = gst_caps_from_string(CAPS_STR);

	if( !mBufferCaps )
	{
		printf(LOG_HYDRA "gstreamer failed to parse caps from string\n");
		return false;
	}

	gst_app_src_set_caps(appsrc, mBufferCaps);
	//gst_app_src_set_size(appsrc, 640*480*10);
	//gst_app_src_set_max_bytes(appsrc, 640*480*20);
	gst_app_src_set_stream_type(appsrc, GST_APP_STREAM_TYPE_STREAM);
	//gst_app_src_set_latency(appsrc, 1, 20);

	//g_object_set(G_OBJECT(m_pAppSrc), "caps", m_pCaps, NULL); 
	//g_object_set(G_OBJECT(mAppSrc), "is-live", TRUE, NULL); 
	//g_object_set(G_OBJECT(mAppSrc), "block", FALSE, NULL); 
	g_object_set(G_OBJECT(mAppSrc), "do-timestamp", TRUE, NULL);

	/*typedef enum {
	  GST_STATE_CHANGE_FAILURE             = 0,
	  GST_STATE_CHANGE_SUCCESS             = 1,
	  GST_STATE_CHANGE_ASYNC               = 2,
	  GST_STATE_CHANGE_NO_PREROLL          = 3
	} GstStateChangeReturn;*/


	printf(LOG_HYDRA "gstreamer transitioning pipeline to GST_STATE_PLAYING\n");
	const GstStateChangeReturn result = gst_element_set_state(mPipeline, GST_STATE_PLAYING);

	if( result == GST_STATE_CHANGE_ASYNC )
	{
#if 0
		GstMessage* asyncMsg = gst_bus_timed_pop_filtered(mBus, 5 * GST_SECOND, 
    	 					      (GstMessageType)(GST_MESSAGE_ASYNC_DONE|GST_MESSAGE_ERROR)); 

		if( asyncMsg != NULL )
		{
			gst_message_print(mBus, asyncMsg, this);
			gst_message_unref(asyncMsg);
		}
		else
			printf(LOG_HYDRA "gstreamer NULL message after transitioning pipeline to PLAYING...\n");
#endif
	}
	else if( result != GST_STATE_CHANGE_SUCCESS )
	{
		printf(LOG_HYDRA "gstreamer failed to set pipeline state to PLAYING (error %u)\n", result);
		return false;
	}

	return Node::Open();
}
	


// Close
bool gstEncoder::Close()
{
	// send EOS
	mNeedData = false;
	
	printf(LOG_HYDRA "gstreamer sending encoder EOS\n");
	GstFlowReturn eos_result = gst_app_src_end_of_stream(GST_APP_SRC(mAppSrc));

	if( eos_result != 0 )
		printf(LOG_HYDRA "gstreamer failed sending appsrc EOS (result %u)\n", eos_result);

	sysSleepMs(250);

	// stop pipeline
	printf(LOG_HYDRA "gstreamer transitioning pipeline to GST_STATE_NULL\n");

	const GstStateChangeReturn result = gst_element_set_state(mPipeline, GST_STATE_NULL);

	if( result != GST_STATE_CHANGE_SUCCESS )
		printf(LOG_HYDRA "gstreamer failed to set pipeline state to PLAYING (error %u)\n", result);

	sysSleepMs(250);

	// stop node and polling thread
	if( !Node::Close() )
		return false;

	return true;
}

Originally I tried using nvvidconv for the YUV I420 conversion required by nv_omx_h264enc, but it was blowing up my pipeline with an "internal flow error", so I used NPP to do the colorspace conversion instead. There was also the issue that a gstreamer pipeline likes to run inside a GMainLoop, but this had to be integrated into my application's already existing main loop, so the pipeline's bus needed to be popped routinely instead.

Dustin,

Could you provide a description of the intended usage?

thanks again!

Jonathan

For now you can make your own class similar to the one in the example. Eventually I will release the rest of the project. Here are some notes about the functions:

gstEncoder::Open() creates the pipeline and changes it to the playing state.
gstEncoder::ProcessBuffer() takes in an external buffer and pushes it to the H.264 encoder using appsrc.

The onNeed() and onEnough() static callbacks are used by appsrc to signal when it's OK to push data.

The gst_bus_pop() loop in gstEncoder::ProcessEmit() is meant to be called regularly from your application’s main loop.
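
A rough usage sketch from the application side looks like this (GrabNextFrame() is just a placeholder for whatever produces your frames; Buffer is the wrapper type used in the code above):

gstEncoder encoder;

if( !encoder.Open() )                   // builds the pipeline and sets it to PLAYING
	return false;

while( running )
{
	Buffer* frame = GrabNextFrame();    // hypothetical I420 frame source
	encoder.ProcessBuffer(frame);       // copies the frame and pushes it via appsrc
	encoder.ProcessEmit();              // polls the pipeline bus for messages
}

encoder.Close();                        // sends EOS and stops the pipeline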

Normally you shouldn’t export the library paths like that. Nor should you explicitly use the paths in the first place.

Creating a Makefile may sound like overkill when you want to do something really simple, but it's much more convenient to just type "make" than to try to remember to set the library paths every time.

For a simple Makefile, check e.g.: