Jetson Nano

Hardware: Jetson Nano
CUDA: 10.2.300
cuDNN: 8.2.1.3
TRT: 8.0.1.6
JetPack: 4.6

I'm getting this error when I run the pipeline.

Now playing: /opt/nvidia/deepstream/deepstream-6.0/samples/streams/sample_720p.h264
Opening in BLOCKING MODE
0:00:05.082479335 2825 0x559d782c10 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1900> [UID = 1]: deserialized trt engine from :/home/nano/silpa/troisai-wms2.0/model_b1_gpu0_fp32.engine
INFO: [Implicit Engine Info]: layers num: 2
0 INPUT kFLOAT images 3x640x640
1 OUTPUT kFLOAT output 25607

0:00:05.082641059 2825 0x559d782c10 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2004> [UID = 1]: Use deserialized engine model: /home/nano/silpa/troisai-wms2.0/model_b1_gpu0_fp32.engine
0:00:05.119537916 2825 0x559d782c10 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus: [UID 1]: Load new model:dstest1_pgie_config.txt sucessfully
Running…
NvMMLiteOpen : Block : BlockType = 261
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 261
ERROR: [TRT]: 2: [pluginV2DynamicExtRunner.cpp::execute::115] Error Code 2: Internal Error (Assertion status == kSTATUS_SUCCESS failed.)
ERROR: Failed to enqueue trt inference batch
ERROR: Infer context enqueue buffer failed, nvinfer error:NVDSINFER_TENSORRT_ERROR
0:00:05.636076943 2825 0x559d005f70 WARN nvinfer gstnvinfer.cpp:1324:gst_nvinfer_input_queue_loop: error: Failed to queue input batch for inferencing
ERROR from element primary-nvinference-engine: Failed to queue input batch for inferencing
Error details: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(1324): gst_nvinfer_input_queue_loop (): /GstPipeline:custom_model_pipeline/GstNvInfer:primary-nvinference-engine
Frame Number = 0 Number of objects = 0 Vehicle Count = 0 Person Count = 0
Returned, stopping playback
ERROR: [TRT]: 2: [pluginV2DynamicExtRunner.cpp::execute::115] Error Code 2: Internal Error (Assertion status == kSTATUS_SUCCESS failed.)
ERROR: Failed to enqueue trt inference batch
ERROR: Infer context enqueue buffer failed, nvinfer error:NVDSINFER_TENSORRT_ERROR
0:00:05.644459989 2825 0x559d005f70 WARN nvinfer gstnvinfer.cpp:1324:gst_nvinfer_input_queue_loop: error: Failed to queue input batch for inferencing
ERROR: [TRT]: 2: [pluginV2DynamicExtRunner.cpp::execute::115] Error Code 2: Internal Error (Assertion status == kSTATUS_SUCCESS failed.)
ERROR: Failed to enqueue trt inference batch
ERROR: Infer context enqueue buffer failed, nvinfer error:NVDSINFER_TENSORRT_ERROR
0:00:05.652174942 2825 0x559d005f70 WARN nvinfer gstnvinfer.cpp:1324:gst_nvinfer_input_queue_loop: error: Failed to queue input batch for inferencing
ERROR: [TRT]: 2: [pluginV2DynamicExtRunner.cpp::execute::115] Error Code 2: Internal Error (Assertion status == kSTATUS_SUCCESS failed.)
ERROR: Failed to enqueue trt inference batch
ERROR: Infer context enqueue buffer failed, nvinfer error:NVDSINFER_TENSORRT_ERROR
0:00:05.659513740 2825 0x559d005f70 WARN nvinfer gstnvinfer.cpp:1324:gst_nvinfer_input_queue_loop: error: Failed to queue input batch for inferencing
Deleting pipeline

Which sample and model do you use?

This error means that the model conversion failed.

Can you share your code and configuration file?

I was using a YOLOv5n instance segmentation model and trying to access only the detections.
The code is:

#include <gst/gst.h>
#include <glib.h>
#include <stdio.h>
#include <cuda_runtime_api.h>
#include "gstnvdsmeta.h"
#include "gst-nvmessage.h"
#include "gstnvdsinfer.h"

#define MUXER_OUTPUT_WIDTH 1920
#define MUXER_OUTPUT_HEIGHT 1080

#define PGIE_CLASS_ID_VEHICLE 2
#define PGIE_CLASS_ID_PERSON 0

#define MUXER_BATCH_TIMEOUT_USEC 40000

gint frame_number = 0;

static GstPadProbeReturn
osd_sink_pad_buffer_probe (GstPad * pad, GstPadProbeInfo * info,
    gpointer u_data)
{
  GstBuffer *buf = (GstBuffer *) info->data;
  guint num_rects = 0;
  guint vehicle_count = 0;
  guint person_count = 0;
  NvDsDisplayMeta *display_meta = NULL;
  NvDsUserMeta *obj_meta = NULL;
  NvDsMetaList *l_frame = NULL;
  NvDsMetaList *l_obj = NULL;
  NvDsInferSegmentationMeta *user_meta = NULL;
  NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);

  for (l_frame = batch_meta->frame_meta_list; l_frame != NULL;
      l_frame = l_frame->next) {
    NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) (l_frame->data);
    int offset = 0;
    g_print ("yes1 \n");
    /* Walk the frame-level user meta list and read the segmentation
     * output attached by nvinfer. */
    for (l_obj = frame_meta->frame_user_meta_list; l_obj != NULL;
        l_obj = l_obj->next) {
      obj_meta = (NvDsUserMeta *) (l_obj->data);
      user_meta = (NvDsInferSegmentationMeta *) (obj_meta->user_meta_data);
      g_print ("silpa1 = %d", user_meta->classes);
      g_print ("silpa2 = %d", user_meta->width);
      g_print ("IN frame meta list ");
    }
  }

  g_print ("Frame Number = %d Number of objects = %d "
      "Vehicle Count = %d Person Count = %d\n",
      frame_number, num_rects, vehicle_count, person_count);
  frame_number++;
  return GST_PAD_PROBE_OK;
}

static gboolean
bus_call (GstBus * bus, GstMessage * msg, gpointer data)
{
  GMainLoop *loop = (GMainLoop *) data;
  switch (GST_MESSAGE_TYPE (msg)) {
    case GST_MESSAGE_EOS:
      g_print ("End of stream\n");
      g_main_loop_quit (loop);
      break;
    case GST_MESSAGE_ERROR:{
      gchar *debug;
      GError *error;
      gst_message_parse_error (msg, &error, &debug);
      g_printerr ("ERROR from element %s: %s\n",
          GST_OBJECT_NAME (msg->src), error->message);
      if (debug)
        g_printerr ("Error details: %s\n", debug);
      g_free (debug);
      g_error_free (error);
      g_main_loop_quit (loop);
      break;
    }
    default:
      break;
  }
  return TRUE;
}

int
main (int argc, char *argv[])
{
  GMainLoop *loop = NULL;
  GstElement *pipeline = NULL, *source = NULL, *h264parser = NULL,
      *decoder = NULL, *streammux = NULL, *sink = NULL, *pgie = NULL,
      *nvvidconv = NULL;
  GstPad *seg_src_pad = NULL;
  GstBus *bus = NULL;
  guint bus_watch_id;
  int current_device = -1;
  cudaGetDevice (&current_device);
  struct cudaDeviceProp prop;
  cudaGetDeviceProperties (&prop, current_device);

  if (argc != 2) {
    g_printerr ("Usage: %s <H264 filename>\n", argv[0]);
    return -1;
  }

  gst_init (&argc, &argv);
  loop = g_main_loop_new (NULL, FALSE);

  pipeline = gst_pipeline_new ("custom_model_pipeline");
  source = gst_element_factory_make ("filesrc", "file-source");
  h264parser = gst_element_factory_make ("h264parse", "h264-parser");
  decoder = gst_element_factory_make ("nvv4l2decoder", "nvv4l2-decoder");
  streammux = gst_element_factory_make ("nvstreammux", "stream-muxer");
  if (!pipeline || !streammux) {
    g_printerr ("One element could not be created. (!pipeline || !streammux) Exiting.\n");
    return -1;
  }

  pgie = gst_element_factory_make ("nvinfer", "primary-nvinference-engine");
  nvvidconv = gst_element_factory_make ("nvvideoconvert", "nvvideo-converter");
  sink = gst_element_factory_make ("multifilesink", "filesink");
  if (!source || !h264parser || !decoder || !pgie || !nvvidconv || !sink) {
    g_printerr ("One element could not be created. (!source || !h264parser || !decoder || !pgie || !nvvidconv || !sink) Exiting.\n");
    return -1;
  }

  g_object_set (G_OBJECT (sink), "location", "./saved/image_%04d.jpg", NULL);
  g_object_set (G_OBJECT (source), "location", argv[1], NULL);
  g_object_set (G_OBJECT (streammux), "batch-size", 1, NULL);
  g_object_set (G_OBJECT (streammux), "width", MUXER_OUTPUT_WIDTH,
      "height", MUXER_OUTPUT_HEIGHT,
      "batched-push-timeout", MUXER_BATCH_TIMEOUT_USEC, NULL);
  g_object_set (G_OBJECT (pgie),
      "config-file-path", "dstest1_pgie_config.txt", NULL);

  bus = gst_pipeline_get_bus (GST_PIPELINE (pipeline));
  bus_watch_id = gst_bus_add_watch (bus, bus_call, loop);
  gst_object_unref (bus);

  gst_bin_add_many (GST_BIN (pipeline),
      source, h264parser, decoder, streammux, pgie,
      nvvidconv, sink, NULL);

  GstPad *sinkpad, *srcpad;
  gchar pad_name_sink[16] = "sink_0";
  gchar pad_name_src[16] = "src";
  sinkpad = gst_element_get_request_pad (streammux, pad_name_sink);
  if (!sinkpad) {
    g_printerr ("Streammux request sink pad failed. Exiting.\n");
    return -1;
  }
  srcpad = gst_element_get_static_pad (decoder, pad_name_src);
  if (!srcpad) {
    g_printerr ("Decoder request src pad failed. Exiting.\n");
    return -1;
  }
  if (gst_pad_link (srcpad, sinkpad) != GST_PAD_LINK_OK) {
    g_printerr ("Failed to link decoder to stream muxer. Exiting.\n");
    return -1;
  }
  gst_object_unref (sinkpad);
  gst_object_unref (srcpad);

  if (!gst_element_link_many (source, h264parser, decoder, NULL)) {
    g_printerr ("Elements could not be linked: 1. Exiting.\n");
    return -1;
  }
  if (!gst_element_link_many (streammux, pgie, sink, NULL)) {
    g_printerr ("Elements could not be linked: 2. Exiting.\n");
    return -1;
  }

  seg_src_pad = gst_element_get_static_pad (sink, "sink");
  if (!seg_src_pad)
    g_print ("Unable to get src pad\n");
  else
    gst_pad_add_probe (seg_src_pad, GST_PAD_PROBE_TYPE_BUFFER,
        osd_sink_pad_buffer_probe, NULL, NULL);
  gst_object_unref (seg_src_pad);

  g_print ("Now playing: %s\n", argv[1]);
  gst_element_set_state (pipeline, GST_STATE_PLAYING);
  g_print ("Running…\n");
  g_main_loop_run (loop);

  g_print ("Returned, stopping playback\n");
  gst_element_set_state (pipeline, GST_STATE_NULL);
  g_print ("Deleting pipeline\n");
  gst_object_unref (GST_OBJECT (pipeline));
  g_source_remove (bus_watch_id);
  g_main_loop_unref (loop);
  return 0;
}

Config file:

[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
model-color-format=0
onnx-file=/home/nano/silpa/troisai-wms2.0/yolov5n-seg-super.onnx
#model-engine-file=/home/nano/silpa/troisai-wms2.0/latest_wasnik.engine
#int8-calib-file=calib.table
labelfile-path=/home/nano/silpa/troisai-wms2.0/yolov5_seg/labels.txt
batch-size=1
network-mode=0
num-detected-classes=80
interval=0
gie-unique-id=1
process-mode=1
network-type=2
cluster-mode=2
#maintain-aspect-ratio=1
symmetric-padding=1
#force-implicit-batch-dim=1
#workspace-size=1000
parse-bbox-func-name=NvDsInferParseYolo
#parse-bbox-func-name=NvDsInferParseYoloCuda
#parse-bbox-instance-mask-func-name=NvDsInferParseCustomBatchedNMSTLTMask
custom-lib-path=/home/nano/silpa/troisai-wms2.0/libnvdsinfer_custom_impl_Yolo.so
engine-create-func-name=NvDsInferYoloCudaEngineGet

[class-attrs-all]
nms-iou-threshold=0.45
pre-cluster-threshold=0.25
topk=300

The configuration and source code look ok.

Does your YOLOv5 use this project?

I can run it successfully.

Make sure your ONNX model is converted and run on the same device.
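As a quick sanity check that the exported ONNX can be built into an engine on the Nano itself, one option (my own suggestion, not part of the thread's workflow, since the posted config builds the engine through the DeepStream-Yolo custom library) is to run trtexec directly on the device:

# Build a test engine on the Jetson itself (typical JetPack location of trtexec; output path is illustrative)
/usr/src/tensorrt/bin/trtexec --onnx=/home/nano/silpa/troisai-wms2.0/yolov5n-seg-super.onnx --saveEngine=/tmp/yolov5n-seg-test.engine

If this fails on the device, the ONNX export itself (opset, dynamic axes) is the first thing to check.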

I am using the YOLOv5n segmentation model, and the ONNX model is converted and run on the same device.

I can successfully run this model on DeepStream 6.2.

Due to hardware limitations, there is no way to fall back to DeepStream 6.0 currently.

Can you update TensorRT to 8.2 and DeepStream to 6.0.1?

I am using DeepStream 6.0.1 and TensorRT 8.0.1, so I will try with TensorRT 8.2.

Hi,
I tried with TensorRT 8.2 and DeepStream 6.0.1. The pipeline is working and reaches EOS, but I am not able to access the metadata.

Do you use deepstream-app? Make sure deepstream-app works normally first.
If you see bounding boxes on the display, the object metadata should be accessible.
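For reference, a minimal sketch of a pad probe that reads detector output, assuming nvinfer runs as a detector (network-type=0) and the probe is attached downstream of it, the same way osd_sink_pad_buffer_probe is attached in the code above (the probe name is illustrative):

/* Sketch: read bounding boxes from NvDsObjectMeta attached by nvinfer in
 * detector mode. Detected objects live in obj_meta_list, not in
 * frame_user_meta_list. */
static GstPadProbeReturn
detection_probe (GstPad * pad, GstPadProbeInfo * info, gpointer u_data)
{
  GstBuffer *buf = (GstBuffer *) info->data;
  NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);
  if (!batch_meta)
    return GST_PAD_PROBE_OK;

  for (NvDsMetaList *l_frame = batch_meta->frame_meta_list; l_frame != NULL;
      l_frame = l_frame->next) {
    NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) l_frame->data;
    for (NvDsMetaList *l_obj = frame_meta->obj_meta_list; l_obj != NULL;
        l_obj = l_obj->next) {
      NvDsObjectMeta *obj = (NvDsObjectMeta *) l_obj->data;
      g_print ("class %d: left=%.1f top=%.1f w=%.1f h=%.1f conf=%.2f\n",
          obj->class_id, obj->rect_params.left, obj->rect_params.top,
          obj->rect_params.width, obj->rect_params.height, obj->confidence);
    }
  }
  return GST_PAD_PROBE_OK;
}

If this probe prints nothing while the pipeline runs, the detector is not producing object metadata, which matches the behavior described below.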

Yes, I used deepstream-app, but I was not able to see the bounding boxes on the display.

network-type=2 means this is a semantic segmentation network.

Here is the description

Modify it to 0 and you can see the bounding boxes.
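For reference, a sketch of the relevant [property] lines for detector mode, assuming the rest of the config stays as posted above:

[property]
# 0 = detector: boxes are parsed by NvDsInferParseYolo and returned as NvDsObjectMeta,
# which deepstream-app's OSD can draw
network-type=0
num-detected-classes=80
cluster-mode=2
parse-bbox-func-name=NvDsInferParseYolo
custom-lib-path=/home/nano/silpa/troisai-wms2.0/libnvdsinfer_custom_impl_Yolo.so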

deepstream-segmentation-test is a sample for semantic segmentation; you can refer to it.

Thanks

I changed it to 0; there is still no change.

There has been no update from you for a while, so we are assuming this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

It is related to the model.

I tried yolov5s, and the bounding boxes can be drawn normally.

When using yolov5-seg, you can only see the metadata in the log. Due to a limitation of DeepStream, it cannot be displayed on the OSD unless you use OpenCV to draw it yourself.
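For reading that metadata (the "see it in the log" path), here is a minimal sketch of a probe that checks the user-meta type before casting, assuming gst-nvinfer attaches NvDsInferSegmentationMeta when network-type=2; field names follow the DeepStream headers and the probe name is illustrative:

/* Sketch: read the semantic-segmentation output attached by gst-nvinfer. */
static GstPadProbeReturn
seg_meta_probe (GstPad * pad, GstPadProbeInfo * info, gpointer u_data)
{
  GstBuffer *buf = (GstBuffer *) info->data;
  NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);
  if (!batch_meta)
    return GST_PAD_PROBE_OK;

  for (NvDsMetaList *l_frame = batch_meta->frame_meta_list; l_frame != NULL;
      l_frame = l_frame->next) {
    NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) l_frame->data;
    for (NvDsMetaList *l_user = frame_meta->frame_user_meta_list;
        l_user != NULL; l_user = l_user->next) {
      NvDsUserMeta *user_meta = (NvDsUserMeta *) l_user->data;
      /* Skip any other user meta attached to the frame. */
      if (user_meta->base_meta.meta_type != NVDSINFER_SEGMENTATION_META)
        continue;
      NvDsInferSegmentationMeta *seg =
          (NvDsInferSegmentationMeta *) user_meta->user_meta_data;
      /* class_map is a width x height array of per-pixel class indices. */
      g_print ("seg meta: classes=%u map=%ux%u first pixel class=%d\n",
          seg->classes, seg->width, seg->height, seg->class_map[0]);
    }
  }
  return GST_PAD_PROBE_OK;
}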

If you want a simple mask, deepstream-segmentation-test can help you.

Like this:

./deepstream-segmentation-app config_infer_primary_yoloV5.txt /opt/nvidia/deepstream/deepstream/samples/streams/sample_720p_mjpeg.mp4

config_infer_primary_yoloV5.txt like this

Here is a patch for deepstream-segmentation-test:
out.patch (2.2 KB)

In config_infer_primary_yoloV5.txt, modify the network-type value to 2.

The display of semantic segmentation in the OSD is already on the roadmap.
