How to continue tracking after changing the source

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) Jetson ORIN-NX
• DeepStream Version 6.2 (deepstream-l4t:6.2-triton docker image)
• JetPack Version (valid for Jetson only) 5.1.1
• TensorRT Version 8.5.2

Hello everyone,

I’m building an application where I run inference and tracking on multiple videos, changing sources once the current ones finish, as recommended here. Whenever a source changes, new tracks are instantiated; however, I want the existing tracks to continue after the source change. The videos are recorded sequentially, so the last frame of the first video and the first frame of the second video are consecutive in time. There is therefore no issue with expecting the same tracks in the second video.

The pipeline remains in the PLAYING state during the source switch, so the tracker and nvinfer keep their state.

I couldn’t find a reason for new tracks to be instantiated when changing sources, since everything in the tracker stays the same. I suspect it checks a variable in the metadata to decide whether the frame comes from the same source or a new one.

How can I make it continue the same tracks after changing sources?

Can you share more details about how you change the sources?

Hi kesong,

I copy-pasted code from the runtime source add/delete sample app.
I added a probe to the decodebin's pad to drop the end-of-stream event.

    g_print("decodebin new pad %s\n", name);
    if (!strncmp(name, "video", 5)) {
        gint source_id = src_ctx->index;
        gchar pad_name[16] = {0};
        GstPad *sinkpad = NULL;
        g_snprintf(pad_name, 15, "sink_%u", source_id);
        sinkpad = gst_element_get_request_pad(streammux, pad_name);
        if (gst_pad_link(pad, sinkpad) != GST_PAD_LINK_OK)
            g_print("Failed to link decodebin to pipeline\n");
        else
            g_print("Decodebin linked to pipeline\n");
        gst_object_unref(sinkpad);

        gst_pad_add_probe(pad, GST_PAD_PROBE_TYPE_EVENT_DOWNSTREAM, eos_probe_cb, data, NULL);
    }

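For context, the probe looks roughly like this. This is a sketch, not the exact code: the GStreamer calls are real API, but the `SourceCtx` fields (`finished`, `finished_cond`, `finished_mutex`) are assumptions based on the control loop in my other thread.

```cpp
/* Sketch of the EOS-dropping pad probe. The SourceCtx members are
 * assumptions mirroring the per-source bookkeeping used elsewhere. */
static GstPadProbeReturn
eos_probe_cb(GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
    SourceCtx *src_ctx = (SourceCtx *)user_data;

    if (GST_EVENT_TYPE(GST_PAD_PROBE_INFO_EVENT(info)) == GST_EVENT_EOS) {
        /* Wake the control thread that swaps the sources... */
        pthread_mutex_lock(&src_ctx->finished_mutex);
        src_ctx->finished = true;
        pthread_cond_signal(&src_ctx->finished_cond);
        pthread_mutex_unlock(&src_ctx->finished_mutex);

        /* ...and swallow the EOS so the rest of the pipeline keeps playing. */
        return GST_PAD_PROBE_DROP;
    }
    return GST_PAD_PROBE_OK;
}
```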
In another thread, I wait until inference is done, then remove the old sources and add the new ones, as recommended in the runtime source add/delete app. The while loop in that thread:

    while (!end_loop) {
        /* Wait until inference has finished on every current source. */
        for (size_t i = 0; i < deepstream_inference->m_config.input_video_path_list.size(); ++i) {
            /* pthread_cond_wait() requires the mutex to be held. */
            pthread_mutex_lock(&deepstream_inference->m_src_ctx_list[i].finished_mutex);
            while (!deepstream_inference->m_src_ctx_list[i].finished)
                pthread_cond_wait(&deepstream_inference->m_src_ctx_list[i].finished_cond, &deepstream_inference->m_src_ctx_list[i].finished_mutex);
            pthread_mutex_unlock(&deepstream_inference->m_src_ctx_list[i].finished_mutex);

            deepstream_inference->m_src_ctx_list[i].finished = false;
            deepstream_inference->m_src_ctx_list[i].frame_number = 0;

            deepstream_inference->m_src_ctx_list[i].output_file = NULL;
        }

        if (deepstream_inference->m_config.video_output) {
            /* ... */
        }

        /* Read the next input file for each source from stdin. */
        for (size_t i = 0; i < deepstream_inference->m_config.input_video_path_list.size(); ++i) {
            g_print("enter src: %zu\n", i);
            char src_str[1000];
            scanf("%s", src_str);
            if (!strncmp(src_str, "exit", 4)) {
                end_loop = true;
                break;
            }
            deepstream_inference->m_src_ctx_list[i].input_file_name = std::string(src_str);

            std::string output_file_name = deepstream_inference->m_config.output_path + "/" + extract_file_name(deepstream_inference->m_src_ctx_list[i].input_file_name) + ".csv";
            deepstream_inference->m_src_ctx_list[i].output_file_name = output_file_name;
        }

        if (end_loop)
            break;

        /* Re-add the sources and set them to PLAYING. */
        for (size_t i = 0; i < deepstream_inference->m_config.input_video_path_list.size(); ++i) {
            add_source(deepstream_inference->m_pipeline, &deepstream_inference->m_src_ctx_list[i]);

            /* Set state of the new source bin to playing */
            state_return = gst_element_set_state(deepstream_inference->m_src_ctx_list[i].source_bin, GST_STATE_PLAYING);
            switch (state_return) {
            case GST_STATE_CHANGE_SUCCESS:
                g_print("STATE CHANGE SUCCESS\n\n");
                break;
            case GST_STATE_CHANGE_FAILURE:
                g_print("STATE CHANGE FAILURE\n\n");
                break;
            case GST_STATE_CHANGE_ASYNC:
                g_print("STATE CHANGE ASYNC\n\n");
                state_return = gst_element_get_state(deepstream_inference->m_src_ctx_list[i].source_bin, NULL, NULL, GST_CLOCK_TIME_NONE);
                break;
            case GST_STATE_CHANGE_NO_PREROLL:
                g_print("STATE CHANGE NO PREROLL\n\n");
                break;
            }
        }
    }


I sometimes use 1 source, sometimes 2; I don’t think it makes a difference.

What kind of files are you recording? Is it possible to merge those files before the video decoder?

The thing is, the frames come from a live source, and I save the live stream as small chunks, e.g. 1 minute each, and run inference on those videos. I cannot attach the inference pipeline directly to the live source, since inference is not as fast as the live stream.

There is a field called streamID in nvdstracker.h. It is probably updated when I change the source, but how? I don’t have access to the source code of the low-level tracker library; can you check? Then I can manipulate the buffers accordingly so that the streamID stays the same.

The NvMOT_RemoveStreams method is called when I change the source. How does it know the source has changed?

One option is to set the interval property in the PGIE.
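For reference, the PGIE interval is a key in the `[property]` group of the nvinfer configuration file; it specifies how many consecutive batches to skip between inference runs, with the tracker bridging the skipped frames. The value below is only an example:

```
[property]
# run inference on every other batch; tracker fills the gaps
interval=1
```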

streamID is based on the source_id of nvstreammux.

    g_snprintf(pad_name, 15, "sink_%u", source_id);
    sinkpad = gst_element_get_request_pad(streammux, pad_name);

So the source_id is always the same, yet it still removes the stream and creates a new one when I change the source. Under what conditions does it call the NvMOT_RemoveStreams method?