Error in DeepStream 6.1

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) : GPU
• DeepStream Version : 6.1

0:03:51.707872721  7069 0x557e0d453400 WARN                 nvinfer gstnvinfer.cpp:1388:convert_batch_and_push_to_input_thread:<secondary-infer-engine2> error: NvBufSurfTransform failed with error -3 while converting buffer
ERROR from element secondary-infer-engine2: NvBufSurfTransform failed with error -3 while converting buffer
Error details: gstnvinfer.cpp(1388): convert_batch_and_push_to_input_thread (): /GstPipeline:ANPR-pipeline/GstNvInfer:secondary-infer-engine2
0:03:51.733218929  7069 0x557e0d453460 WARN                 nvinfer gstnvinfer.cpp:2299:gst_nvinfer_output_loop:<secondary-infer-engine1> error: Internal data stream error.
0:03:51.733243723  7069 0x557e0d453460 WARN                 nvinfer gstnvinfer.cpp:2299:gst_nvinfer_output_loop:<secondary-infer-engine1> error: streaming stopped, reason error (-5)

I’m running three models: pgie and sgie1 for detection, and sgie2 for classification. However, after running for about 4000 frames the application suddenly crashes with the above error. Below is my config file:

source-list:
  list: file:///home/mainak/../../../

streammux:
  batch-size: 1
  batched-push-timeout: 40000
  width: 1280
  height: 736
  attach-sys-ts: 1
  live-source: 1

osd:
  process-mode: 0
  display-text: 0

#If there is an ROI
analytics-config:
        #filename: config_nvdsanalytics.txt

triton:
  ## 0:disable 1:enable
  enable: 0
  ##0:triton-native 1:triton-grpc
  type: 0
  ##car mode, 1:US car plate model|2: Chinese car plate model
  car-mode: 1

output:
  ## 1:file output  2:fake output 3:eglsink output
  type: 1
  ## 0: H264 encoder  1:H265 encoder
  enc: 0
  bitrate: 4000000
  ##The file name without suffix
  filename: anpr

primary-gie:
  ##For car detection
  config-file-path: ./pgie_config.yml
  unique-id: 1

secondary-gie-0:
  ##For US car plate
  config-file-path: ./sgie1_config.yml
  ##For China mainland car plate
  #config-file-path: lpd_ccpd_yolov4-tiny_config.yml
  unique-id: 2
  process-mode: 2

secondary-gie-1:
  ##For US car plate recognition
  config-file-path: ./lpr_config_sgie_us.yml
  ##For China mainland car plate recognition
  #config-file-path: lpr_config_sgie_ch.yml
  unique-id: 3
  process-mode: 2

I noticed that changing the streammux width and height affects how long the pipeline runs before crashing, but it always crashes eventually.
Below is my pipeline:

/* Use nvinfer to infer on batched frame. */
    pgie = gst_element_factory_make("nvinfer", "primary-nvinference-engine");
    sgie1 = gst_element_factory_make("nvinfer", "secondary-infer-engine1");
    sgie2 = gst_element_factory_make("nvinfer", "secondary-infer-engine2");
    tracker = gst_element_factory_make("nvtracker", "nvtracker");

    /* Add queue elements between every two elements */
    queue1 = gst_element_factory_make("queue", "queue1");
    queue2 = gst_element_factory_make("queue", "queue2");
    queue3 = gst_element_factory_make("queue", "queue3");
    queue4 = gst_element_factory_make("queue", "queue4");
    queue5 = gst_element_factory_make("queue", "queue5");
    queue6 = gst_element_factory_make("queue", "queue6");
    queue7 = gst_element_factory_make("queue", "queue7");
    queue8 = gst_element_factory_make("queue", "queue8");

    /* Use nvdslogger for perf measurement. */
    nvdslogger = gst_element_factory_make("nvdslogger", "nvdslogger");

    /* Use nvtiler to composite the batched frames into a 2D tiled array based
     * on the source of the frames. */
    // tiler = gst_element_factory_make("nvmultistreamtiler", "nvtiler");

    /* Use converter to convert from NV12 to RGBA as required by nvosd */
    nvvidconv = gst_element_factory_make("nvvideoconvert", "nvvideo-converter");

    /* Create OSD to draw on the converted RGBA buffer */
    nvosd = gst_element_factory_make("nvdsosd", "nv-onscreendisplay");

    /* Create Sink*/
    // sink = gst_element_factory_make("fakesink", "nvvideo-renderer");
    sink = gst_element_factory_make("nveglglessink", "nvvideo-renderer"); // for display
    // sink = gst_element_factory_make("fakesink", "nvvideo-renderer");

    if (!pgie || !sgie1 || !sgie2 || !tracker || !nvdslogger || !nvvidconv || !nvosd || !sink)
    {
        g_printerr("One element could not be created. Exiting.\n");
        return -1;
    }

    if (g_str_has_suffix(argv[1], ".yml") || g_str_has_suffix(argv[1], ".yaml"))
    {

        nvds_parse_streammux(streammux, argv[1], "streammux");

        nvds_parse_gie(pgie, argv[1], "primary-gie");
        nvds_parse_gie(sgie1, argv[1], "secondary-gie-0");
        nvds_parse_gie(sgie2, argv[1], "secondary-gie-1");

        g_object_get(G_OBJECT(pgie), "batch-size", &pgie_batch_size, NULL);
        if (pgie_batch_size != num_sources)
        {
            g_printerr("WARNING: Overriding infer-config batch-size (%d) with number of sources (%d)\n",
                       pgie_batch_size, num_sources);
            g_object_set(G_OBJECT(pgie), "batch-size", num_sources, NULL);
        }

        nvds_parse_osd(nvosd, argv[1], "osd");

        // tiler_rows = (guint)sqrt(num_sources);
        // tiler_columns = (guint)ceil(1.0 * num_sources / tiler_rows);
        // g_object_set(G_OBJECT(tiler), "rows", tiler_rows, "columns", tiler_columns, NULL);

        // nvds_parse_tiler(tiler, argv[1], "tiler");
        // nvds_parse_egl_sink(sink, argv[1], "sink");
        g_object_set(G_OBJECT(sink), "qos", 0, NULL);
        g_object_set(G_OBJECT(sink), "sync", 0, NULL);

    }
    else
    {

        g_object_set(G_OBJECT(streammux), "batch-size", num_sources, NULL);

        g_object_set(G_OBJECT(streammux), "width", MUXER_OUTPUT_WIDTH, "height",
                     MUXER_OUTPUT_HEIGHT,
                     "batched-push-timeout", MUXER_BATCH_TIMEOUT_USEC, NULL);

        /* Configure the nvinfer element using the nvinfer config file. */
        g_object_set(G_OBJECT(pgie),
                     "config-file-path", "./models/ped_pgie_config.txt", NULL);

        /* Override the batch-size set in the config file with the number of sources. */
        g_object_get(G_OBJECT(pgie), "batch-size", &pgie_batch_size, NULL);
        if (pgie_batch_size != num_sources)
        {
            g_printerr("WARNING: Overriding infer-config batch-size (%d) with number of sources (%d)\n",
                       pgie_batch_size, num_sources);
            g_object_set(G_OBJECT(pgie), "batch-size", num_sources, NULL);
        }

        // tiler_rows = (guint)sqrt(num_sources);
        // tiler_columns = (guint)ceil(1.0 * num_sources / tiler_rows);
        // /* we set the tiler properties here */
        // g_object_set(G_OBJECT(tiler), "rows", tiler_rows, "columns", tiler_columns,
        //              "width", TILED_OUTPUT_WIDTH, "height", TILED_OUTPUT_HEIGHT, NULL);

        // g_object_set(G_OBJECT(nvosd), "process-mode", OSD_PROCESS_MODE,
        //              "display-text", OSD_DISPLAY_TEXT, NULL);

        g_object_set(G_OBJECT(sink), "qos", 0, NULL);
    }

    g_object_set(
        G_OBJECT(tracker), "tracker-width", MUXER_OUTPUT_WIDTH, "tracker-height", MUXER_OUTPUT_HEIGHT,
        "gpu_id", 0, "ll-lib-file",
        "/opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so",
        "ll-config-file", "/opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/config_tracker_NvDCF_perf.yml", "enable_batch_process", 1,
        NULL);

    // /*Use this for multifilesink*/
    // g_object_set(
    //     G_OBJECT(sink), "location", "/home/mainak/ms/C++/bbpl/pedestrian/output/image_%02d.png", "async", 0, NULL);

    /* we add a message handler */
    bus = gst_pipeline_get_bus(GST_PIPELINE(pipeline));
    bus_watch_id = gst_bus_add_watch(bus, bus_call, loop);
    gst_object_unref(bus);

    /* Set up the pipeline */
    /* we add all elements into the pipeline */
    if (transform)
    {
        gst_bin_add_many(GST_BIN(pipeline), queue1, pgie, queue2, nvdslogger,
                         queue3, nvvidconv, queue4, nvosd, queue5, transform, sink, NULL);
        /* we link the elements together
         * nvstreammux -> pgie -> nvdslogger -> nvvidconv -> nvosd
         * -> transform -> video-renderer */
        if (!gst_element_link_many(streammux, queue1, pgie, queue2, nvdslogger,
                                   queue3, nvvidconv, queue4, nvosd, queue5, transform, sink, NULL))
        {
            g_printerr("Elements could not be linked. Exiting.\n");
            return -1;
        }
    }
    else
    {
        gst_bin_add_many(GST_BIN(pipeline), queue1, pgie, queue2, tracker, queue3, sgie1, queue4,
                         sgie2, queue5, nvdslogger, queue6, nvvidconv, queue7, nvosd, queue8, sink, NULL);
        /* we link the elements together
         * nvstreammux -> pgie -> tracker -> sgie1 -> sgie2 -> nvdslogger
         * -> nvvidconv -> nvosd -> video-renderer */
        if (!gst_element_link_many(streammux, queue1, pgie, queue2, tracker, queue3, sgie1, queue4,
                                   sgie2, queue5, nvdslogger, queue6, nvvidconv, queue7, nvosd, queue8, sink, NULL))
        {
            g_printerr("Elements could not be linked. Exiting.\n");
            return -1;
        }
    }

    /* Create context for object encoding */
    NvDsObjEncCtxHandle obj_ctx_handle = nvds_obj_enc_create_context();
    if (!obj_ctx_handle)
    {
        g_print("Unable to create context\n");
        return -1;
    }

    if (save_img == 1)
    {
        /* Let's add a probe to get informed of the generated metadata. We add
         * the probe to the src pad of the tracker element, since by that time
         * the buffer carries the tracked-object metadata. */
        gie_src_pad = gst_element_get_static_pad(tracker, "src");
        if (!gie_src_pad)
            g_print("Unable to get src pad\n");
        else
            gst_pad_add_probe(gie_src_pad, GST_PAD_PROBE_TYPE_BUFFER,
                              gie_src_pad_buffer_probe, (gpointer)obj_ctx_handle, NULL);
        gst_object_unref(gie_src_pad);
    }

    /* Let's add a probe to get informed of the generated metadata. We add the
     * probe to the src pad of the sgie2 element, since by that time the buffer
     * carries all the inference metadata. */
    osd_src_pad = gst_element_get_static_pad(sgie2, "src");
    if (!osd_src_pad)
        g_print("Unable to get sink pad\n");
    else
        gst_pad_add_probe(osd_src_pad, GST_PAD_PROBE_TYPE_BUFFER,
                          osd_src_pad_buffer_probe, (gpointer)obj_ctx_handle, NULL);
    gst_object_unref(osd_src_pad);

    /* Set the pipeline to "playing" state */
    if (g_str_has_suffix(argv[1], ".yml") || g_str_has_suffix(argv[1], ".yaml"))
    {
        g_print("Using file: %s\n", argv[1]);
    }
    else
    {
        g_print("Now playing:");
        for (i = 0; i < num_sources; i++)
        {
            g_print(" %s,", argv[i + 1]);
        }
        g_print("\n");
    }
    gst_element_set_state(pipeline, GST_STATE_PLAYING);

    /* Wait till pipeline encounters an error or EOS */
    g_print("Running...\n");
    g_main_loop_run(loop);

    /* Out of the main loop, clean up nicely */
    g_print("Returned, stopping playback\n");
    gst_element_set_state(pipeline, GST_STATE_NULL);
    g_print("Deleting pipeline\n");
    gst_object_unref(GST_OBJECT(pipeline));
    g_source_remove(bus_watch_id);
    g_main_loop_unref(loop);

    /** Paho MQTT*/
    g_print("Disconnecting MQTT Client\n");
    if ((rc = MQTTClient_disconnect(client, 10000)) != MQTTCLIENT_SUCCESS)
        printf("Failed to disconnect, return code %d\n", rc);
    MQTTClient_destroy(&client);
    g_print("Destroying MQTT Client\n");

    return 0;

Any help is highly appreciated. @fanzh @yuweiw

Hi,

The error you are getting is NvBufSurfTransformError_Invalid_Params. I’ve seen that error in the past when NvBufferTransform gets a memory type that it doesn’t like.

You can try adding a nvvideoconvert after the streammux with nvbuf-memory-type=3 or nvbuf-memory-type=2 to see if that solves your issue.
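Something along these lines, a rough sketch (untested; the element name and exact link point here are just illustration):

    /* Force buffers into a memory type that NvBufSurfTransform accepts.
     * On dGPU: 2 = NVBUF_MEM_CUDA_DEVICE, 3 = NVBUF_MEM_CUDA_UNIFIED. */
    GstElement *memconv = gst_element_factory_make("nvvideoconvert", "memtype-converter");
    g_object_set(G_OBJECT(memconv), "nvbuf-memory-type", 3, NULL);

    gst_bin_add(GST_BIN(pipeline), memconv);
    /* Link it right after the muxer: streammux -> memconv -> pgie -> ... */
    if (!gst_element_link_many(streammux, memconv, pgie, NULL))
        g_printerr("Failed to link the converter after streammux\n");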

Hi @miguel.taylor ,
I made the following changes:

 /* Use the converter to move buffers into the suggested memory type */
    nvvidconv = gst_element_factory_make("nvvideoconvert", "nvvideo-converter");
    g_object_set(G_OBJECT(nvvidconv), "nvbuf-memory-type", 3, NULL);

and finally:

gst_bin_add_many(GST_BIN(pipeline), queue1, nvvidconv, queue7, pgie, queue2, tracker, queue3, sgie1, queue4,
                         sgie2, queue5, nvdslogger, queue6, nvosd, queue8, sink, NULL);
        /* we link the elements together
         * nvstreammux -> nvvidconv -> pgie -> tracker -> sgie1 -> sgie2
         * -> nvdslogger -> nvosd -> video-renderer */
        if (!gst_element_link_many(streammux, queue1, nvvidconv, queue7, pgie, queue2, tracker, queue3, sgie1, queue4,
                                   sgie2, queue5, nvdslogger, queue6, nvosd, queue8, sink, NULL))
        {
            g_printerr("Elements could not be linked. Exiting.\n");
            return -1;
        }

However, the problem still persists. Did I follow your suggestion correctly, or is there something I’m missing?
Here’s the error:

 WARN                 nvinfer gstnvinfer.cpp:1388:convert_batch_and_push_to_input_thread:<secondary-infer-engine2> error: NvBufSurfTransform failed with error -3 while converting buffer
ERROR from element secondary-infer-engine2: NvBufSurfTransform failed with error -3 while converting buffer
Error details: gstnvinfer.cpp(1388): convert_batch_and_push_to_input_thread (): /GstPipeline:ANPR-pipeline/GstNvInfer:secondary-infer-engine2

What does your complete pipeline look like? Can you share the complete code?

Or can you try upgrading to the latest version?

No problem can be seen in the current code.

Sorry, your code is incomplete and I can’t compile it.

1. Your pipeline can run, but crashes after running for a while. Is that correct?

2. Try DBG_NVBUFSURFTRANSFORM=1 ./your_app parameters > log.log 2>&1, then share the log file.

I want to know what happens in NvBufSurfTransform.

  1. Yes, it crashes after some time. As I adjust the streammux height and width parameters, the time at which it crashes changes, but it always crashes.
  2. I will share the log ASAP.
mainak@ms$ DBG_NVBUFSURFTRANSFORM=1 ./ANPR_KP ./models/anpr_config.yml ./config.json > log_11.txt

I ran the above, where ANPR_KP is the application. I’m attaching the generated log. Is this what you were asking for?
log_11.txt (18.8 MB)
Basically, in the log I’m printing data from the attached probe functions, so the NvBufSurfTransform details get printed for each frame.
Please check the lines below from the log (line numbers 242848 and 242850):

nvbufsurftransform.cpp:3278: NvBufSurfTransform_GPU_CuTex=> SrcCrop rect's left must not be greater than width

nvbufsurftransform.cpp:3791: NvBufSurfTransform_GPU=> Error(-3) returned

The pipeline’s streammux width is 1280 and height is 736, and the tracker dimensions are the same as the streammux dimensions.
Input size: 1920x1080.
I shall be highly obliged.

If the problem is indeed caused by the sgie, you can try to debug get_converted_buffer in the gst_nvinfer_process_objects function.
From the error log, it seems that a bbox exceeds the width and height of the video.
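If you need a stopgap while you debug, a buffer probe upstream of sgie2 can clamp every object box to the frame before nvinfer crops it. This is only my sketch (untested against your pipeline); it assumes the boxes are in streammux coordinates, so it reuses your MUXER_OUTPUT_WIDTH/HEIGHT constants:

    #include "gstnvdsmeta.h"

    /* Sketch: clamp object bboxes to the frame so a secondary gie's crop can
     * never start outside the image. Attach to the sgie2 sink pad with:
     * gst_pad_add_probe(pad, GST_PAD_PROBE_TYPE_BUFFER, clamp_bbox_probe, NULL, NULL); */
    static GstPadProbeReturn
    clamp_bbox_probe(GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
    {
        GstBuffer *buf = GST_PAD_PROBE_INFO_BUFFER(info);
        NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta(buf);
        if (!batch_meta)
            return GST_PAD_PROBE_OK;

        const gfloat fw = MUXER_OUTPUT_WIDTH;   /* boxes are in muxer coordinates here */
        const gfloat fh = MUXER_OUTPUT_HEIGHT;

        for (NvDsMetaList *l_frame = batch_meta->frame_meta_list; l_frame; l_frame = l_frame->next) {
            NvDsFrameMeta *frame_meta = (NvDsFrameMeta *)l_frame->data;
            for (NvDsMetaList *l_obj = frame_meta->obj_meta_list; l_obj; l_obj = l_obj->next) {
                NvOSD_RectParams *r = &((NvDsObjectMeta *)l_obj->data)->rect_params;
                if (r->left < 0) r->left = 0;
                if (r->top < 0) r->top = 0;
                if (r->left > fw - 1) r->left = fw - 1;
                if (r->top > fh - 1) r->top = fh - 1;
                if (r->left + r->width > fw) r->width = fw - r->left;
                if (r->top + r->height > fh) r->height = fh - r->top;
            }
        }
        return GST_PAD_PROBE_OK;
    }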

Do certain frames always cause the error?

/opt/nvidia/deepstream/deepstream/sources/gst-plugins/gst-nvinfer/gstnvinfer.cpp

Alternatively, trying a newer version may be a faster way to solve the problem.

How can we discard entries whose bbox values fall outside the video’s width and height?
Out of curiosity, why does such a problem arise when everything else is set up correctly?
One final question before I close the topic: which version of DS should I use, 6.2 or 6.3?

The above is just speculation based on the error log, not a confirmed root cause. The bbox output depends on the model. You can check fillDetectionOutput in /opt/nvidia/deepstream/deepstream/sources/libs/nvdsinfer/nvdsinfer_context_impl_output_parsing.cpp

I think you should try the latest DS-7.0.

Actually, we already have 20 Jetson Nano (2GB) devices and were planning to deploy on them, which is why I chose DS 6.1 for JetPack compatibility. Upgrading to DS 6.2 or above requires JetPack versions outside the Nano’s support: JetPack Archive | NVIDIA Developer
Huge problem!

I’m using YOLOv5 for detection. Any suggestions on how to scale everything correctly? Here is the function that seems to do the job:

/* Parse all object bounding boxes for the class `classIndex` in the frame
 * meeting the minimum threshold criteria.
 *
 * This parser function has been specifically written for the sample resnet10
 * model provided with the SDK. Other models will require this function to be
 * modified.
 */
bool
DetectPostprocessor::parseBoundingBox(vector<NvDsInferLayerInfo> const& outputLayersInfo,
    NvDsInferNetworkInfo const& networkInfo,
    NvDsInferParseDetectionParams const& detectionParams,
    vector<NvDsInferObjectDetectionInfo>& objectList)
{

    int outputCoverageLayerIndex = -1;
    int outputBBoxLayerIndex = -1;


    for (unsigned int i = 0; i < outputLayersInfo.size(); i++)
    {
        if (strstr(outputLayersInfo[i].layerName, "bbox") != nullptr)
        {
            outputBBoxLayerIndex = i;
        }
        if (strstr(outputLayersInfo[i].layerName, "cov") != nullptr)
        {
            outputCoverageLayerIndex = i;
        }
    }

    if (outputCoverageLayerIndex == -1)
    {
        printError("Could not find output coverage layer for parsing objects");
        return false;
    }
    if (outputBBoxLayerIndex == -1)
    {
        printError("Could not find output bbox layer for parsing objects");
        return false;
    }

    float *outputCoverageBuffer =
        (float *)outputLayersInfo[outputCoverageLayerIndex].buffer;
    float *outputBboxBuffer =
        (float *)outputLayersInfo[outputBBoxLayerIndex].buffer;

    NvDsInferDimsCHW outputCoverageDims;
    NvDsInferDimsCHW outputBBoxDims;

    getDimsCHWFromDims(outputCoverageDims,
        outputLayersInfo[outputCoverageLayerIndex].inferDims);
    getDimsCHWFromDims(
        outputBBoxDims, outputLayersInfo[outputBBoxLayerIndex].inferDims);

    unsigned int targetShape[2] = { outputCoverageDims.w, outputCoverageDims.h };
    float bboxNorm[2] = { 35.0, 35.0 };
    float gcCenters0[targetShape[0]];
    float gcCenters1[targetShape[1]];
    int gridSize = outputCoverageDims.w * outputCoverageDims.h;
    int strideX = DIVIDE_AND_ROUND_UP(networkInfo.width, outputBBoxDims.w);
    int strideY = DIVIDE_AND_ROUND_UP(networkInfo.height, outputBBoxDims.h);

    for (unsigned int i = 0; i < targetShape[0]; i++)
    {
        gcCenters0[i] = (float)(i * strideX + 0.5);
        gcCenters0[i] /= (float)bboxNorm[0];
    }
    for (unsigned int i = 0; i < targetShape[1]; i++)
    {
        gcCenters1[i] = (float)(i * strideY + 0.5);
        gcCenters1[i] /= (float)bboxNorm[1];
    }

    unsigned int numClasses =
        std::min(outputCoverageDims.c, detectionParams.numClassesConfigured);
    for (unsigned int classIndex = 0; classIndex < numClasses; classIndex++)
    {

        /* Pointers to memory regions containing the (x1,y1) and (x2,y2) coordinates
         * of rectangles in the output bounding box layer. */
        float *outputX1 = outputBboxBuffer
            + classIndex * sizeof (float) * outputBBoxDims.h * outputBBoxDims.w;

        float *outputY1 = outputX1 + gridSize;
        float *outputX2 = outputY1 + gridSize;
        float *outputY2 = outputX2 + gridSize;

        /* Iterate through each point in the grid and check if the rectangle at that
         * point meets the minimum threshold criteria. */
        for (unsigned int h = 0; h < outputCoverageDims.h; h++)
        {
            for (unsigned int w = 0; w < outputCoverageDims.w; w++)
            {
                int i = w + h * outputCoverageDims.w;
                float confidence = outputCoverageBuffer[classIndex * gridSize + i];

                if (confidence < detectionParams.perClassPreclusterThreshold[classIndex])
                    continue;

                float rectX1Float, rectY1Float, rectX2Float, rectY2Float;

                /* Centering and normalization of the rectangle. */
                rectX1Float =
                    outputX1[w + h * outputCoverageDims.w] - gcCenters0[w];
                rectY1Float =
                    outputY1[w + h * outputCoverageDims.w] - gcCenters1[h];
                rectX2Float =
                    outputX2[w + h * outputCoverageDims.w] + gcCenters0[w];
                rectY2Float =
                    outputY2[w + h * outputCoverageDims.w] + gcCenters1[h];

                rectX1Float *= -bboxNorm[0];
                rectY1Float *= -bboxNorm[1];
                rectX2Float *= bboxNorm[0];
                rectY2Float *= bboxNorm[1];

                /* Clip parsed rectangles to frame bounds. */
                if (rectX1Float >= (int)m_NetworkInfo.width)
                    rectX1Float = m_NetworkInfo.width - 1;
                if (rectX2Float >= (int)m_NetworkInfo.width)
                    rectX2Float = m_NetworkInfo.width - 1;
                if (rectY1Float >= (int)m_NetworkInfo.height)
                    rectY1Float = m_NetworkInfo.height - 1;
                if (rectY2Float >= (int)m_NetworkInfo.height)
                    rectY2Float = m_NetworkInfo.height - 1;

                if (rectX1Float < 0)
                    rectX1Float = 0;
                if (rectX2Float < 0)
                    rectX2Float = 0;
                if (rectY1Float < 0)
                    rectY1Float = 0;
                if (rectY2Float < 0)
                    rectY2Float = 0;

                //Prevent underflows
                if(((rectX2Float - rectX1Float) < 0) || ((rectY2Float - rectY1Float) < 0))
                    continue;

                objectList.push_back({ classIndex, rectX1Float,
                         rectY1Float, (rectX2Float - rectX1Float),
                         (rectY2Float - rectY1Float), confidence});
            }
        }
    }
    return true;
}

This is specific to resnet. For YOLOv5 I’m setting the parse-bbox-func-name property in config.yml as required.
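For reference, the relevant lines in my YAML config look roughly like this (the function and library names below are placeholders for whatever the custom parser actually exports):

    property:
      # placeholders: must match the symbol and .so produced by your custom parser build
      parse-bbox-func-name: NvDsInferParseCustomYoloV5
      custom-lib-path: ./libnvdsinfer_custom_impl_yolov5.so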

You gave the wrong device type; the Jetson Nano cannot be upgraded to a version higher than DS 6.0.1.

https://docs.nvidia.com/metropolis/deepstream/6.3/dev-guide/text/DS_Quickstart.html#platform-and-os-compatibility

If you use yolov5, you can refer to the code below.

Can your current problem be solved by changing the detection model?

I’m developing on a dGPU (for a POC) but have to deploy on 20 Jetson Nanos. Even if I develop with DS 6.2 on the dGPU, can I run it on the Nano? I guess not; hence the problem.

Actually, the pgie is YOLOv5 and sgie1 is YOLOv4. Can that be the reason for this error? And how should I set the net-scale-factor property for sgie1, i.e., on what basis?

dGPU is not fully compatible with Jetson, and there are some minor differences even with the same version. Generally speaking, porting is not complicated.

I don’t think this causes issues. For net-scale-factor, refer to this link: Gst-nvinfer — DeepStream documentation 6.4 documentation

The low-level library preprocesses the transformed frames (performs normalization and mean subtraction) and produces final float RGB/BGR/GRAY planar data which is passed to the TensorRT engine for inferencing.
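So net-scale-factor is the multiplier in y = net-scale-factor * (x - offsets); it should mirror the normalization sgie1 was trained with, not anything about the pipeline. A sketch, assuming the YOLOv4 sgie was trained on inputs scaled to [0, 1] with no mean subtraction (verify against your training setup):

    property:
      # y = net-scale-factor * (x - offsets); 1/255 maps 0..255 pixels to 0..1
      net-scale-factor: 0.00392156862745098
      offsets: 0;0;0
      # 0 = RGB, 1 = BGR; must match the training color order
      model-color-format: 0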

Can a mismatch in the TensorRT version be the problem?

This may not be the root cause.

Since the Jetson Nano cannot be upgraded to a higher version, it is recommended that you merge attach_metadata_detector (in /opt/nvidia/deepstream/deepstream/sources/gst-plugins/gst-nvinfer/gstnvinfer_meta_utils.cpp) from DS-7.0 into DS-6.0 to see if it solves your problem.
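The essence of that change is clipping the parsed boxes to the frame before they are attached as object metadata. Roughly this kind of logic (my paraphrase of the idea, not the verbatim DS-7.0 code):

    #include <glib.h>

    /* Paraphrased sketch of the back-port: clamp a parsed detector box to the
     * frame so a downstream NvBufSurfTransform crop can never start outside
     * the image. */
    static void
    clip_box_to_frame(float *left, float *top, float *width, float *height,
                      float frame_width, float frame_height)
    {
        *left   = CLAMP(*left, 0.0f, frame_width - 1.0f);
        *top    = CLAMP(*top, 0.0f, frame_height - 1.0f);
        *width  = CLAMP(*width, 0.0f, frame_width - *left);
        *height = CLAMP(*height, 0.0f, frame_height - *top);
    }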

I upgraded to DS 6.3, as I have Ubuntu 20.04, and the problem is no longer there. Thanks.
Lastly, can I run the DS 6.3 Docker image on an NVIDIA Jetson Nano 2GB?

I think this is not feasible, because some libraries on Jetson are shared between the host and Docker; Docker on Jetson is tied to the BSP.

I still recommend that you merge the nvinfer change onto the Jetson Nano used for deployment.