There is an error when running deepstream-mrcnn-app

TRT will do the conversion; the nvinfer plugin is open source, so you can check the source code, and if you want more information about TRT, you may need to read the TensorRT documentation.

What model (or input for TRT) should be used for TRT to convert into the engine ("mask_rcnn_resnet50.etlt_b1_gpu0_int8.engine")?

I want to run the "deepstream-mrcnn-test" sample first and see some results, to understand the example.

Could you give me the engine directly, so I can run the sample as soon as possible?

I thought that by using tlt-converter, mask_rcnn_resnet50.etlt could be converted into mask_rcnn_resnet50.etlt_b1_gpu0_int8.engine.

Is my understanding correct?
If it is, how do I get tlt-converter?
Can tlt-converter run on a TX2?

You do not necessarily need tlt-converter. Follow the README; you do not need to separately build the engine, the sample will do it for you.
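For reference, the sample's nvinfer config already points at both the .etlt model and the engine path; when the engine file does not exist, nvinfer builds it from the .etlt automatically. An illustrative excerpt (key names from the nvinfer config reference; check the shipped dsmrcnn_pgie_config.txt for the real values):

# illustrative excerpt, not the verbatim shipped config
tlt-encoded-model=../../../../samples/models/tlt_pretrained_models/mrcnn/mask_rcnn_resnet50.etlt
tlt-model-key=<key from the model card>
model-engine-file=../../../../samples/models/tlt_pretrained_models/mrcnn/mask_rcnn_resnet50.etlt_b1_gpu0_int8.engine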

OK, I will try it.

Following your instructions, I downloaded libnvinfer_plugin.so.7.0.0.1 and copied it over /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7.1.3.
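The commands I ran were roughly these (the library path is the JetPack default; my download location may differ from yours):

sudo cp libnvinfer_plugin.so.7.0.0.1 /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7.1.3
sudo ldconfig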

When running deepstream-mrcnn-app, I get a new error that did not occur before.

The error is: ERROR: [TRT] "Failed to parse UFF model" / "build engine file failed".

What should I do next?
Could you give me some help?

The full output is below:

$ deepstream-mrcnn-app -i sample_720p.h264 -p libnvds_amqp_proto.so -c cfg_amqp.txt --topic="topicname" --conn-str="localhost;5672;guest;guest"
2020-11-04 15:28:47.055172: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.2
Now playing: sample_720p.h264

Using winsys: x11
Opening in BLOCKING MODE
ERROR: Deserialize engine failed because file path: /opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/deepstream-mrcnn-test/../../../../samples/models/tlt_pretrained_models/mrcnn/mask_rcnn_resnet50.etlt_b1_gpu0_int8.engine open error
0:00:02.783172266 9506 0x211f7920 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1690> [UID = 1]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/deepstream-mrcnn-test/../../../../samples/models/tlt_pretrained_models/mrcnn/mask_rcnn_resnet50.etlt_b1_gpu0_int8.engine failed
0:00:02.783292231 9506 0x211f7920 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1797> [UID = 1]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/deepstream-mrcnn-test/../../../../samples/models/tlt_pretrained_models/mrcnn/mask_rcnn_resnet50.etlt_b1_gpu0_int8.engine failed, try rebuild
0:00:02.783322214 9506 0x211f7920 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1715> [UID = 1]: Trying to create engine from model files
ERROR: [TRT]: UffParser: Validator error: multilevel_propose_rois: Unsupported operation _MultilevelProposeROI_TRT
parseModel: Failed to parse UFF model
ERROR: failed to build network since parsing model errors.
ERROR: Failed to create network using custom network creation function
ERROR: Failed to get cuda engine from custom library API
0:00:03.855574980 9506 0x211f7920 ERROR nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1735> [UID = 1]: build engine file failed
Segmentation fault (core dumped)

Hi,
There is an nvinfer plugin built for TensorRT 7.1; please use that one:
deepstream_tlt_apps/TRT-OSS/Jetson/TRT7.1
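After building (or downloading) the 7.1 plugin, back up the stock library and replace it, roughly like this (default JetPack library path assumed; see the TRT-OSS README for the exact steps):

sudo cp /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7.1.3 ${HOME}/libnvinfer_plugin.so.7.1.3.bak
sudo cp TRT-OSS/Jetson/TRT7.1/libnvinfer_plugin.so.7.1.3 /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7.1.3
sudo ldconfig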

Thanks for your reply!

I will download and build the TRT 7.1 OSS plugin from:
NVIDIA-AI-IOT/deepstream_tlt_apps/tree/master/TRT-OSS/Jetson/TRT7.1

Is my understanding right?

I built the TensorRT OSS plugin following the instructions in TRT-OSS/Jetson/TRT7.1.

When I reached the step "cd $TRT_SOURCE", I got an error: "bash: cd: pwd: No such file or directory".
I skipped that step, continued step by step, and got libnvinfer_plugin.so.7.1.3 in the TensorRT/build directory.

However, I cannot find a `pwd`/out directory or libnvinfer_plugin.so.7.1.0 there.
Could you give me some help?

Below are my steps from TRT-OSS/Jetson/TRT7.1, but without the "cd $TRT_SOURCE" step:

git clone -b release/7.1 https://github.com/NVIDIA/TensorRT
cd TensorRT/
git submodule update --init --recursive
export TRT_SOURCE=pwd
# (skipped: cd $TRT_SOURCE)
mkdir -p build && cd build
/usr/local/bin/cmake .. -DGPU_ARCHS=62 -DTRT_LIB_DIR=/usr/lib/aarch64-linux-gnu/ -DCMAKE_C_COMPILER=/usr/bin/gcc -DTRT_BIN_DIR=pwd/out
make nvinfer_plugin -j$(nproc)

I can make TRT 7.1 successfully,
but I cannot find a pwd/out directory.

The instructions say:
"After building ends successfully, libnvinfer_plugin.so* will be generated under `pwd`/out/."

How do I get the resulting libnvinfer_plugin.so.7.1.0?
Could you give me some help?

Build TensorRT OSS Plugin

git clone -b release/7.1 https://github.com/nvidia/TensorRT
cd TensorRT/
git submodule update --init --recursive
export TRT_SOURCE=`pwd`
cd $TRT_SOURCE
mkdir -p build && cd build
/usr/local/bin/cmake .. -DGPU_ARCHS="53 62 72" -DTRT_LIB_DIR=/usr/lib/aarch64-linux-gnu/ -DCMAKE_C_COMPILER=/usr/bin/gcc -DTRT_BIN_DIR=`pwd`/out
make nvinfer_plugin -j$(nproc)
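Note the backticks around pwd. `pwd` is command substitution: the shell runs pwd and substitutes its output, so TRT_SOURCE becomes the absolute path of the TensorRT checkout. Without the backticks, TRT_SOURCE is the literal string "pwd", which is exactly why "cd $TRT_SOURCE" failed with "No such file or directory" and why no pwd/out directory ever appeared. A quick illustration:

export TRT_SOURCE=pwd       # wrong: TRT_SOURCE is the literal string "pwd"
export TRT_SOURCE=`pwd`     # right: e.g. /home/nvidia/TensorRT
export TRT_SOURCE=$(pwd)    # equivalent modern form
echo $TRT_SOURCE

The same applies to -DTRT_BIN_DIR=`pwd`/out: with the backticks, the built libnvinfer_plugin.so* should land under TensorRT/build/out.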

Now I can run deepstream-mrcnn-app, but nothing is detected.

I also get: ERROR: Serialize engine failed because of mask_rcnn_resnet50.etlt_b1_gpu0_fp16.engine open error.

Below are my command line and the errors:

$ deepstream-mrcnn-app -i sample_720p.h264 -p libnvds_amqp_proto.so -c cfg_amqp.txt --topic="topicname" --conn-str="localhost;5672;guest;guest"
Now playing: sample_720p.h264

Using winsys: x11
Opening in BLOCKING MODE
ERROR: Deserialize engine failed because file path: /opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/deepstream-mrcnn-test/../../../../samples/models/tlt_pretrained_models/mrcnn/mask_rcnn_resnet50.etlt_b1_gpu0_int8.engine open error
0:00:02.990171625 9756 0x3b4b3520 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1690> [UID = 1]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/deepstream-mrcnn-test/../../../../samples/models/tlt_pretrained_models/mrcnn/mask_rcnn_resnet50.etlt_b1_gpu0_int8.engine failed
0:00:02.990269319 9756 0x3b4b3520 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1797> [UID = 1]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/deepstream-mrcnn-test/../../../../samples/models/tlt_pretrained_models/mrcnn/mask_rcnn_resnet50.etlt_b1_gpu0_int8.engine failed, try rebuild
0:00:02.990294918 9756 0x3b4b3520 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1715> [UID = 1]: Trying to create engine from model files

INFO: [TRT]: Some tactics do not have sufficient workspace memory to run. Increasing workspace size may increase performance, please check verbose output.
INFO: [TRT]: Detected 1 inputs and 2 output network tensors.

ERROR: Serialize engine failed because of file path: /opt/nvidia/deepstream/deepstream-5.0/samples/models/tlt_pretrained_models/mrcnn/mask_rcnn_resnet50.etlt_b1_gpu0_fp16.engine opened error
0:04:41.314076041 9756 0x3b4b3520 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1743> [UID = 1]:
failed to serialize cude engine to file: /opt/nvidia/deepstream/deepstream-5.0/samples/models/tlt_pretrained_models/mrcnn/mask_rcnn_resnet50.etlt_b1_gpu0_fp16.engine

INFO: [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT Input 3x832x1344
1 OUTPUT kFLOAT generate_detections 100x6
2 OUTPUT kFLOAT mask_head/mask_fcn_logits/BiasAdd 100x2x28x28

0:04:41.379457302 9756 0x3b4b3520 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus: [UID 1]: Load new model:dsmrcnn_pgie_config.txt sucessfully
Running…
NvMMLiteOpen : Block : BlockType = 261
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 261
Frame Number = 0 Vehicle Count = 0 Person Count = 0
Frame Number = 1 Vehicle Count = 0 Person Count = 0
Frame Number = 2 Vehicle Count = 0 Person Count = 0
Frame Number = 3 Vehicle Count = 0 Person Count = 0
Frame Number = 4 Vehicle Count = 0 Person Count = 0
Frame Number = 5 Vehicle Count = 0 Person Count = 0
Frame Number = 6 Vehicle Count = 0 Person Count = 0
Frame Number = 7 Vehicle Count = 0 Person Count = 0
Frame Number = 8 Vehicle Count = 0 Person Count = 0
Frame Number = 9 Vehicle Count = 0 Person Count = 0
Frame Number = 10 Vehicle Count = 0

Hi,
Sorry for the late reply.
Can you do some troubleshooting first? You can refer to this:

osd_sink_pad_buffer_probe {
  ...
  /* Objects are counted here; you can add some prints to check which part goes wrong. */
  if (obj_meta->class_id == PGIE_CLASS_ID_VEHICLE)
    vehicle_count++;
  if (obj_meta->class_id == PGIE_CLASS_ID_PERSON)
    person_count++;
  ...
}
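Also, about the "Serialize engine failed because of file path ... open error" warning in your log: the engine is still built in memory (that is why the pipeline starts and frames are processed), but nvinfer cannot write the engine cache file, so it rebuilds on every launch. That is usually just a write-permission problem on the models directory; a possible fix (default install path assumed) is:

sudo chmod -R a+w /opt/nvidia/deepstream/deepstream-5.0/samples/models/tlt_pretrained_models/mrcnn/

or run the app once with sudo so the engine can be cached.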

Hello, I was running into the same problem as seeklover77 in their last post.
I am also not detecting any vehicles or people in the image when I run the following command:

./deepstream-mrcnn-app -i /opt/nvidia/deepstream/deepstream-5.0/samples/streams/sample_720p.h264 -p /opt/nvidia/deepstream/deepstream-5.0/lib/libnvds_amqp_proto.so --conn-str="10.0.0.143;5672;guest;guest" --topic="MaskTest"

I followed your most recent post, and after placing some prints here and there it seems that obj_meta_list is NULL.
The hardware that I am running this on is a Jetson Nano.
Also, am I running the right video? I know in the readme it said to run sample_720p.h264, but just wanted to make sure that was the right file to run.

Update: I have looked at the frame_meta data in a debugger and got this information, in case it helps:
{base_meta = {batch_meta = 0x7f60002ef0, meta_type = NVDS_FRAME_META, uContext = 0x0, copy_func = 0x0, release_func = 0x7fb76468c0}, pad_index = 0, batch_id = 0, frame_num = 0, buf_pts = 0, ntp_timestamp = 1605642279787139000, source_id = 0, num_surfaces_per_frame = 1, source_frame_width = 1280, source_frame_height = 720, surface_type = 0, surface_index = 0, num_obj_meta = 0, bInferDone = 1, obj_meta_list = 0x0, display_meta_list = 0x0, frame_user_meta_list = 0x0, misc_frame_info = {0, 0, 0, 0}, reserved = {0, 0, 0, 0}}

Hi @Amycao, I am having the same issue as @Wasabi-Bobby and @seeklover77.

I am running DS 5.0 on a Jetson Nano. The deepstream-mrcnn-app buffers never contain any object metadata structures. The model runs fine, but I cannot find where the output of the inference ends up.

Following your instruction, I added g_print() calls to see the results.

None of the prints I added are ever output.
It seems osd_sink_pad_buffer_probe() does not run correctly,
or maybe I have made some other mistake.

Could you give me some help?

Below is the probe with the g_print() calls I added:

static GstPadProbeReturn
osd_sink_pad_buffer_probe (GstPad * pad, GstPadProbeInfo * info,
    gpointer u_data)
{
  GstBuffer *buf = (GstBuffer *) info->data;
  guint vehicle_count = 0;
  guint person_count = 0;
  NvDsMetaList *l_frame = NULL;
  NvDsMetaList *l_obj = NULL;

  NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);
  if (!batch_meta) {
    /* No batch meta attached. */
    g_print ("no batch meta\n");
    return GST_PAD_PROBE_OK;
  }

  for (l_frame = batch_meta->frame_meta_list; l_frame != NULL;
      l_frame = l_frame->next) {
    NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) l_frame->data;
    if (frame_meta == NULL) {
      /* Ignore null frame meta. */
      g_print ("frame_meta is NULL\n");
      continue;
    }

    for (l_obj = frame_meta->obj_meta_list; l_obj != NULL;
        l_obj = l_obj->next) {
      NvDsObjectMeta *obj_meta = (NvDsObjectMeta *) l_obj->data;
      if (obj_meta == NULL) {
        /* Ignore null object meta. */
        g_print ("obj_meta is NULL\n");
        continue;
      }

      if (obj_meta->class_id == PGIE_CLASS_ID_VEHICLE) {
        vehicle_count++;
        g_print ("vehicle in osd = %d\n", vehicle_count);
      }
      if (obj_meta->class_id == PGIE_CLASS_ID_PERSON) {
        person_count++;
        g_print ("person in osd = %d\n", person_count);
      }

      /* These fire once per object that does not match the class. */
      if (obj_meta->class_id != PGIE_CLASS_ID_VEHICLE)
        g_print ("no vehicle found\n");
      if (obj_meta->class_id != PGIE_CLASS_ID_PERSON)
        g_print ("no person found\n");
    }
  }

  return GST_PAD_PROBE_OK;
}

We are checking internally; thanks for your patience. We will get back to you.

Hi,
Vehicles can be detected; we tested on a Nano and a Xavier. You can see the log below:
0:00:07.230821252 14046 0x558f1a1f00 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1805> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-5.0/samples/models/tlt_pretrained_models/mrcnn/mask_rcnn_resnet50.etlt_b1_gpu0_fp16.engine
0:00:07.331913656 14046 0x558f1a1f00 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus: [UID 1]: Load new model:dsmrcnn_pgie_config.txt sucessfully
Running…
NvMMLiteOpen : Block : BlockType = 261
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 261
Frame Number = 0 Vehicle Count = 3 Person Count = 0
Frame Number = 1 Vehicle Count = 3 Person Count = 0
I am not sure what the difference is. Can anyone who hit this issue share the built model engine so I can have a try? I am sharing my engine here; you can also try it on your side and report back the result.
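To test a shared engine, point the pgie config at it; an illustrative line for dsmrcnn_pgie_config.txt (the path is a placeholder):

model-engine-file=/path/to/mask_rcnn_resnet50.etlt_b1_gpu0_fp16.engine

Note that an engine file is specific to the TensorRT version and device it was built on, so results can differ across boards.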

Thank you for the fast response!
I tried out the model you provided and still have the same results.

Here is what mine gives back (I added some extra print statements to check whether some items exist):

Now playing: /opt/nvidia/deepstream/deepstream-5.0/samples/streams/sample_720p.h264

Using winsys: x11 
Opening in BLOCKING MODE 
WARNING: [TRT]: Using an engine plan file across different models of devices is not recommended and is likely to affect performance or even cause errors.
0:00:11.598924542 29935   0x55c2c54730 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1701> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-5.0/samples/models/tlt_pretrained_models/mrcnn/mask_rcnn_resnet50.etlt_b1_gpu0_fp16.engine
INFO: [Implicit Engine Info]: layers num: 3
0   INPUT  kFLOAT Input           3x832x1344      
1   OUTPUT kFLOAT generate_detections 100x6           
2   OUTPUT kFLOAT mask_head/mask_fcn_logits/BiasAdd 100x2x28x28     

0:00:11.599187361 29935   0x55c2c54730 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1805> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-5.0/samples/models/tlt_pretrained_models/mrcnn/mask_rcnn_resnet50.etlt_b1_gpu0_fp16.engine
0:00:11.978519309 29935   0x55c2c54730 INFO                 nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<primary-nvinference-engine> [UID 1]: Load new model:dsmrcnn_pgie_config.txt sucessfully
Running...
NvMMLiteOpen : Block : BlockType = 261 
NVMEDIA: Reading vendor.tegra.display-size : status: 6 
NvMMLiteBlockCreate : Block : BlockType = 261 
Looping through frame meta list
Obj meta list is null... 
Test Frame Number = 0 Vehicle Count = 0 Person Count = 0
Looping through frame meta list
Obj meta list is null... 
Test Frame Number = 1 Vehicle Count = 0 Person Count = 0
Looping through frame meta list
Obj meta list is null... 
Test Frame Number = 2 Vehicle Count = 0 Person Count = 0

Here is the link to my model, which was generated by running ./deepstream in the mask-rcnn folder.

I can confirm that this engine file works and inference is being performed.

Now, how can I make my own engine file using tlt-converter? When I tried using the provided .etlt file, the resulting .engine file did not produce any inference.

I can think of one difference: maybe you are using a different libnvinfer_plugin.so.
Are you using sources/deepstream_tlt_apps/TRT-OSS/Jetson/libnvinfer_plugin.so.7.0.0.1,
sources/deepstream_tlt_apps/TRT-OSS/Jetson/TRT7.1/libnvinfer_plugin.so.7.1.3, or something else?
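If you do build your own engine with tlt-converter, note that the converter also needs the OSS libnvinfer_plugin (the MaskRCNN plugins live there), so install the 7.1 plugin first. The invocation looks roughly like this; the key is whatever your model card specifies, and the dims and output node names below are taken from the engine info printed in the logs above:

./tlt-converter -k <model key> \
  -d 3,832,1344 \
  -o generate_detections,mask_head/mask_fcn_logits/BiasAdd \
  -t fp16 \
  -m 1 \
  -e mask_rcnn_resnet50.etlt_b1_gpu0_fp16.engine \
  mask_rcnn_resnet50.etlt

Then point model-engine-file in the pgie config at the generated engine.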