Delay difference of a live video stream between the gst-launch-1.0 command and the appsink callback

Hi DaneLLL! I have run these commands on the TX1 without any problems. However, how can I tell whether the TX1's CPU clocks have been maximized or not?

The reason jetson_clocks.sh didn't work directly is that you need to give it execute permission first:

sudo chmod ugo+x jetson_clocks.sh

After that you should be able to execute it directly (copying from the other computer didn't preserve permissions).

I do wonder why, if jetson_clocks.sh was in the PC host's "rootfs/home/ubuntu/", it did not already exist on the Jetson. If anything in that rootfs on the host does not appear on the Jetson, then I'd say the flash never took place using that rootfs.

Hi linuxdev! Yes, you are right! I was able to execute "jetson_clocks.sh" without any errors. Thank you very much!

Hello Sulli,
I encountered the same problem as you. This is my code.

    #include <gst/gst.h>
    #include <gst/app/gstappsink.h>
    #include <stdlib.h>
    #include <deque>            // needed for std::deque<Mat> below
    //#include <QTime>
    //#include <QDebug>

    #include "opencv2/opencv.hpp"

    using namespace cv;

    #define CAPS "video/x-raw,format=BGR,framerate=25/1,width=1280,height=720"

    // TODO: use synchronized deque
    GMainLoop *loop;
    std::deque<Mat> frameQueue;
    int live_flag = 0;
    int quit_flag = 0;
    //int sum_time=0;

    GstFlowReturn new_preroll(GstAppSink *appsink, gpointer data)
    {
        g_print ("Got preroll!\n");
        return GST_FLOW_OK;
    }

    GstFlowReturn new_sample(GstAppSink *appsink, gpointer data)
    {
        //QTime time;
        //time.start();
        static int framecount = 0;
        framecount++;

        static int width=0, height=0 ;

        GstSample *sample = gst_app_sink_pull_sample(appsink);
        GstCaps *caps = gst_sample_get_caps(sample);
        GstBuffer *buffer = gst_sample_get_buffer(sample);
        static GstStructure *s;
        const GstStructure *info = gst_sample_get_info(sample);
        // ---- get width and height
        if(framecount==1)
        {
            if(!caps)
            {
                g_print("Could not get image info from filter caps");
                exit(-11);
            }

            s = gst_caps_get_structure(caps,0);
            gboolean res = gst_structure_get_int(s, "width", &width);
            res |= gst_structure_get_int(s, "height", &height);
            if(!res)
            {
                g_print("Could not get image width and height from filter caps");
                exit(-12);
            }
            g_print("Image size: %d\t%d\n",width,height);
        }


        // ---- Read frame and convert to opencv format ---------------
        GstMapInfo map;
        gst_buffer_map (buffer, &map, GST_MAP_READ);

        // convert gstreamer data to OpenCV Mat, you could actually
        // resolve height / width from caps...

        Mat frame(Size(width, height), CV_8UC3, (char*)map.data, Mat::AUTO_STEP);

            // this lags pretty badly even when grabbing frames from webcam
            //Mat edges;
            //cvtColor(frame, edges, CV_RGB2GRAY);
            //GaussianBlur(edges, edges, Size(7,7), 1.5, 1.5);
            //Canny(edges, edges, 0, 30, 3);
        imshow("stream", frame);

        // waitKey() gives HighGUI time to draw; without it imshow() displays nothing
        char key = cv::waitKey(1);
        if(key != -1) quit_flag = 1;


        gst_buffer_unmap(buffer, &map);

        // ------------------------------------------------------------

        // print a dot every 30 frames
        if (framecount % 30 == 0) {
            g_print (".");
        }

        // show caps on the first frame
        if (framecount == 1) {
            g_print ("%s\n", gst_caps_to_string(caps));
        }

        gst_sample_unref (sample);
        //sum_time =time.elapsed();
        //qDebug()<<"time:"<<sum_time<<"\n";
        return GST_FLOW_OK;
    }

    static gboolean my_bus_callback (GstBus *bus, GstMessage *message, gpointer data)
    {
        g_print ("Got %s message from %s\n", GST_MESSAGE_TYPE_NAME (message), GST_OBJECT_NAME (message->src));
        switch (GST_MESSAGE_TYPE (message))
        {
                case GST_MESSAGE_ERROR:
                {
                        GError *err;
                        gchar *debug;

                        gst_message_parse_error (message, &err, &debug);
                        g_print ("Error from %s: %s\n", GST_OBJECT_NAME (message->src), err->message);
                        g_error_free (err);
                        g_free (debug);
                        break;
                }
                case GST_MESSAGE_EOS:
                        /* end-of-stream */
                        quit_flag = 1;
                        break;
                case GST_MESSAGE_STATE_CHANGED:
                {
                        GstState oldstate, newstate;
                        gst_message_parse_state_changed(message, &oldstate, &newstate, NULL);
                        g_print ("Element %s changed state from %s to %s.\n",
                                GST_OBJECT_NAME (message->src),
                                gst_element_state_get_name (oldstate),
                                gst_element_state_get_name (newstate));
                        break;
                }
                default:
                        /* unhandled message */
                        break;
        }
        /* we want to be notified again the next time there is a message
        * on the bus, so returning TRUE (FALSE means we want to stop watching
        * for messages on the bus and our callback should not be called again)
        */
        return TRUE;
    }

    int main (int argc, char *argv[])
    {
        GError *error = NULL;

        GstElement *pipeline, *sink;
        GstStateChangeReturn state_ret;

        GstSample *sample;

        gst_init (&argc, &argv);

        gchar *descr = g_strdup(
            "rtspsrc location=rtsp://admin:admin12345@192.168.0.64:554/Streaming/Channels/101?transportmode=unicast&profile=Profile_1 protocols=tcp latency=0 ! "
            "decodebin ! "
            "nvvidconv ! "
            "videoconvert ! "
            "videoscale ! "
            "appsink name=sink caps=video/x-raw,format=BGR,width=1920,height=1080,framerate=25/1 sync=false"
        );


        pipeline = gst_parse_launch (descr, &error);

        if (error != NULL)
        {
            g_print ("could not construct pipeline: %s\n", error->message);
            g_error_free (error);
            exit (-1);
        }

        /* get sink */
        sink = gst_bin_get_by_name (GST_BIN (pipeline), "sink");

        /*set to pause*/
        state_ret = gst_element_set_state(pipeline, GST_STATE_PAUSED);

        switch(state_ret)
        {
            case GST_STATE_CHANGE_FAILURE:
                g_print ("failed to play the file\n");
                exit (-2);
            case GST_STATE_CHANGE_NO_PREROLL:
                /* for live sources, we need to set the pipeline to PLAYING before we can
                * receive a buffer. */
                g_print ("live source detected\n");
                live_flag = 1;
                break;
            default:
                break;
        }

        gst_app_sink_set_emit_signals((GstAppSink*)sink, true);
        gst_app_sink_set_drop((GstAppSink*)sink, true);
        gst_app_sink_set_max_buffers((GstAppSink*)sink, 1);
        GstAppSinkCallbacks callbacks = { NULL, new_preroll, new_sample };
        gst_app_sink_set_callbacks (GST_APP_SINK(sink), &callbacks, NULL, NULL);

        GstBus *bus;
        guint bus_watch_id;
        bus = gst_pipeline_get_bus (GST_PIPELINE (pipeline));
        bus_watch_id = gst_bus_add_watch (bus, my_bus_callback, NULL);
        gst_object_unref (bus);

        gst_element_set_state (GST_ELEMENT (pipeline), GST_STATE_PLAYING);

        namedWindow("stream",1);

        loop = g_main_loop_new(NULL, false);
        g_main_loop_run(loop);

        cv::destroyWindow("stream");
        g_print ("Going to end of main!\n");
        gst_element_set_state (GST_ELEMENT (pipeline), GST_STATE_NULL);
        gst_object_unref (GST_OBJECT (pipeline));

        return 0;
    }
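The `// TODO: use synchronized deque` comment near the top is worth acting on: `std::deque` is not thread-safe, and the appsink callback runs on a streaming thread. A minimal sketch of a mutex-guarded, bounded queue is below (shown with a generic payload type so it stands alone; in the program above `T` would be `cv::Mat`, and names like `SyncQueue` are my own, not from any library):

```cpp
#include <condition_variable>
#include <deque>
#include <mutex>

// Minimal thread-safe queue: the appsink callback pushes frames,
// a consumer thread pops them. Bounded, so a slow consumer causes
// the oldest frame to be dropped instead of the queue growing forever.
template <typename T>
class SyncQueue {
public:
    explicit SyncQueue(size_t max_size) : max_size_(max_size) {}

    void push(T item) {
        std::lock_guard<std::mutex> lock(mutex_);
        if (queue_.size() >= max_size_)
            queue_.pop_front();               // drop the oldest frame
        queue_.push_back(std::move(item));
        cond_.notify_one();
    }

    T pop() {
        std::unique_lock<std::mutex> lock(mutex_);
        cond_.wait(lock, [this] { return !queue_.empty(); });
        T item = std::move(queue_.front());
        queue_.pop_front();
        return item;
    }

private:
    std::deque<T> queue_;
    std::mutex mutex_;
    std::condition_variable cond_;
    const size_t max_size_;
};
```

With something like this, `new_sample()` would push a deep copy of the frame (`frame.clone()`, since the `Mat` wraps GStreamer's buffer memory) and the main thread would do the `imshow()`/`waitKey()` work, keeping GUI calls off the streaming thread.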

It works well when I execute the following command in the Jetson TX1 terminal.

nvidia@tegra-ubuntu:~$ gst-launch-1.0 rtspsrc location=rtsp://admin:admin12345@192.168.0.64:554/Streaming/Channels/101?transportmode=unicastprofile=Profile_1 protocols=tcp latency=0 ! decodebin ! videoconvert ! xvimagesink
Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Progress: (open) Opening Stream
Progress: (connect) Connecting to rtsp://admin:admin12345@192.168.0.64:554/Streaming/Channels/101?transportmode=unicastprofile=Profile_1
Progress: (open) Retrieving server options
Progress: (open) Retrieving media info
Progress: (request) SETUP stream 0
Progress: (request) SETUP stream 1
Progress: (open) Opened Stream
Setting pipeline to PLAYING ...
New clock: GstSystemClock
Progress: (request) Sending PLAY request
Progress: (request) Sending PLAY request
Progress: (request) Sent PLAY request
Inside NvxLiteH264DecoderLowLatencyInitNvxLiteH264DecoderLowLatencyInit set DPB and MjstreamingInside NvxLiteH265DecoderLowLatencyInitNvxLiteH265DecoderLowLatencyInit set DPB and MjstreamingNvMMLiteOpen : Block : BlockType = 261 
TVMR: NvMMLiteTVMRDecBlockOpen: 7818: NvMMLiteBlockOpen 
NvMMLiteBlockCreate : Block : BlockType = 261 
TVMR: cbBeginSequence: 1190: BeginSequence  1920x1088, bVPR = 0
TVMR: LowCorner Frequency = 180000 
TVMR: cbBeginSequence: 1583: DecodeBuffers = 5, pnvsi->eCodec = 4, codec = 0 
TVMR: cbBeginSequence: 1654: Display Resolution : (1920x1080) 
TVMR: cbBeginSequence: 1655: Display Aspect Ratio : (1920x1080) 
TVMR: cbBeginSequence: 1697: ColorFormat : 5 
TVMR: cbBeginSequence:1702 ColorSpace = NvColorSpace_YCbCr709_ER
TVMR: cbBeginSequence: 1839: SurfaceLayout = 3
TVMR: cbBeginSequence: 1936: NumOfSurfaces = 9, InteraceStream = 0, InterlaceEnabled = 0, bSecure = 0, MVC = 0 Semiplanar = 1, bReinit = 1, BitDepthForSurface = 8 LumaBitDepth = 8, ChromaBitDepth = 8, ChromaFormat = 5
TVMR: cbBeginSequence: 1938: BeginSequence  ColorPrimaries = 1, TransferCharacteristics = 1, MatrixCoefficients = 1
Allocating new output: 1920x1088 (x 9), ThumbnailMode = 0
TVMR: FrameRate = 25 
TVMR: NVDEC LowCorner Freq = (150000 * 1024) 
---> TVMR: Video-conferencing detected !!!!!!!!!
TVMR: FrameRate = 25.000000 
TVMR: FrameRate = 25.000000 
TVMR: FrameRate = 25.000000 
TVMR: FrameRate = 25.000000 
TVMR: FrameRate = 25.000000 
TVMR: FrameRate = 25.000000 
TVMR: FrameRate = 25.000000 
TVMR: FrameRate = 25.000000 
^Chandling interrupt.
Interrupt: Stopping pipeline ...
Execution ended after 0:00:39.169351505
Setting pipeline to PAUSED ...
Setting pipeline to READY ...
TVMR: cbDisplayPicture: 3889: Retunred NULL Frame Buffer 
TVMR: TVMRFrameStatusReporting: 6266: Closing TVMR Frame Status Thread -------------
TVMR: TVMRVPRFloorSizeSettingThread: 6084: Closing TVMRVPRFloorSizeSettingThread -------------
TVMR: TVMRFrameDelivery: 6116: Closing TVMR Frame Delivery Thread -------------
TVMR: NvMMLiteTVMRDecBlockClose: 8018: Done 
Setting pipeline to NULL ...
Freeing pipeline ...

Could you give me some suggestions? Thank you in advance.

Hello DaneLLL,
I ran the gst-launch-1.0 command on the Jetson TX1 terminal.

nvidia@tegra-ubuntu:~$ gst-launch-1.0 rtspsrc location=rtsp://admin:admin12345@192.168.0.64:554/Streaming/Channels/101?transportmode=unicastprofile=Profile_1 protocols=tcp latency=0 ! decodebin ! videoconvert ! xvimagesink
Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Progress: (open) Opening Stream
Progress: (connect) Connecting to rtsp://admin:admin12345@192.168.0.64:554/Streaming/Channels/101?transportmode=unicastprofile=Profile_1
Progress: (open) Retrieving server options
Progress: (open) Retrieving media info
Progress: (request) SETUP stream 0
Progress: (open) Opened Stream
Setting pipeline to PLAYING ...
New clock: GstSystemClock
Progress: (request) Sending PLAY request
Progress: (request) Sending PLAY request
Progress: (request) Sent PLAY request
Inside NvxLiteH264DecoderLowLatencyInitNvxLiteH264DecoderLowLatencyInit set DPB and MjstreamingInside NvxLiteH265DecoderLowLatencyInitNvxLiteH265DecoderLowLatencyInit set DPB and MjstreamingNvMMLiteOpen : Block : BlockType = 261 
TVMR: NvMMLiteTVMRDecBlockOpen: 7818: NvMMLiteBlockOpen 
NvMMLiteBlockCreate : Block : BlockType = 261 
TVMR: cbBeginSequence: 1190: BeginSequence  1920x1088, bVPR = 0
TVMR: LowCorner Frequency = 180000 
TVMR: cbBeginSequence: 1583: DecodeBuffers = 5, pnvsi->eCodec = 4, codec = 0 
TVMR: cbBeginSequence: 1654: Display Resolution : (1920x1080) 
TVMR: cbBeginSequence: 1655: Display Aspect Ratio : (1920x1080) 
TVMR: cbBeginSequence: 1697: ColorFormat : 5 
TVMR: cbBeginSequence:1702 ColorSpace = NvColorSpace_YCbCr709_ER
TVMR: cbBeginSequence: 1839: SurfaceLayout = 3
TVMR: cbBeginSequence: 1936: NumOfSurfaces = 9, InteraceStream = 0, InterlaceEnabled = 0, bSecure = 0, MVC = 0 Semiplanar = 1, bReinit = 1, BitDepthForSurface = 8 LumaBitDepth = 8, ChromaBitDepth = 8, ChromaFormat = 5
TVMR: cbBeginSequence: 1938: BeginSequence  ColorPrimaries = 1, TransferCharacteristics = 1, MatrixCoefficients = 1
Allocating new output: 1920x1088 (x 9), ThumbnailMode = 0
TVMR: FrameRate = 25 
TVMR: NVDEC LowCorner Freq = (150000 * 1024) 
---> TVMR: Video-conferencing detected !!!!!!!!!
TVMR: FrameRate = 25.000000 
TVMR: FrameRate = 25.000000 
TVMR: FrameRate = 25.000000 
TVMR: FrameRate = 25.000000 
^Chandling interrupt.
Interrupt: Stopping pipeline ...
Execution ended after 0:00:20.335878999
Setting pipeline to PAUSED ...
Setting pipeline to READY ...
TVMR: cbDisplayPicture: 3889: Retunred NULL Frame Buffer 
TVMR: TVMRFrameStatusReporting: 6266: Closing TVMR Frame Status Thread -------------
TVMR: TVMRVPRFloorSizeSettingThread: 6084: Closing TVMRVPRFloorSizeSettingThread -------------
TVMR: TVMRFrameDelivery: 6116: Closing TVMR Frame Delivery Thread -------------
TVMR: NvMMLiteTVMRDecBlockClose: 8018: Done 
Setting pipeline to NULL ...
Freeing pipeline ...

The video isn't delayed, but it stutters. I see FrameRate = 25.000000 in the log, but the video looks like only about 10 fps. I want to get real-time video from my IP camera. How can I solve this? Could you give me some suggestions? Thank you in advance.
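One thing worth trying (an assumption on my part, not verified on your camera): keep the colour conversion on the hardware path by capping nvvidconv's output to BGRx, so the CPU-side videoconvert only does the cheap BGRx-to-BGR repack instead of a full NV12-to-BGR conversion at 1920x1080, and let appsink drop stale frames. A sketch of the pipeline description as a small helper (the function name and URL parameter are mine, for illustration):

```cpp
#include <string>

// Build a pipeline description that keeps format conversion on the
// hardware converter (nvvidconv -> BGRx) and leaves only the cheap
// BGRx -> BGR repack to videoconvert. drop=true / max-buffers=1 make
// appsink discard stale frames instead of queuing them up.
std::string make_pipeline(const std::string& rtsp_url) {
    return "rtspsrc location=" + rtsp_url + " protocols=tcp latency=0 ! "
           "decodebin ! "
           "nvvidconv ! video/x-raw,format=BGRx ! "
           "videoconvert ! video/x-raw,format=BGR ! "
           "appsink name=sink sync=false drop=true max-buffers=1";
}
```

Independently of the pipeline, calling `imshow()` inside `new_sample()` blocks the streaming thread for the entire draw, which by itself can make 25 fps feel like 10; handing frames to the main thread (for example through a synchronized queue) and calling `waitKey()` there usually smooths playback.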

I should have clarified this in [url]https://devtalk.nvidia.com/default/topic/1011376/jetson-tx1/gstreamer-decode-live-video-stream-with-the-delay-difference-between-gst-launch-1-0-command-and-appsink-callback/post/5160929/#5160929[/url]