There is an error when running deepstream-mrcnn-app

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): Jetson TX2
• DeepStream Version: 5.0
• JetPack Version (valid for Jetson only): JetPack 4.4
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Sample application and the configuration file content
• Reproduce steps
• Reproducing rate and duration

When I run the deepstream-mrcnn-app sample as below, I get an error: unable to open the shared library libnvds_amqp_proto.so.

Could anyone help me with this?

$ deepstream-mrcnn-app -c cfg_amqp.txt -i sample_720p.h264 -p /opt/nvidia/deepstream/deepstream-5.0/lib/libnvds_amqp_proto.so
Now playing: sample_720p.h264

Using winsys: x11

(deepstream-mrcnn-app:10007): GLib-CRITICAL **: 17:08:55.642: g_strrstr: assertion ‘haystack != NULL’ failed
Running…
ERROR from element nvmsg-broker: Could not initialize supporting library.
Error details: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvmsgbroker/gstnvmsgbroker.c(359): legacy_gst_nvmsgbroker_start (): /GstPipeline:dsmrcnn-pipeline/GstNvMsgBroker:nvmsg-broker:
unable to open shared library
Returned, stopping playback
Deleting pipeline

Please specify --conn-str.
For AMQP, the connection string has the format: host;port;username;password
--topic: the topic you created
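As an illustrative sketch, the connection string decomposes on ';' like this (the values below are just the defaults used elsewhere in this thread, not real credentials):

```shell
# Split a "host;port;username;password" connection string on ';'
# (placeholder values, POSIX shell).
conn="localhost;5672;guest;guest"
old_ifs=$IFS; IFS=';'
set -- $conn            # $1=host $2=port $3=user $4=pass
IFS=$old_ifs
echo "host=$1 port=$2 user=$3 pass=$4"
```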

Thanks for your reply.

There are still errors with --conn-str=localhost;5672;guest;guest.

I set --conn-str according to cfg_amqp.txt; is that right?

How should I set the value of --conn-str?
Could you give me an example of --conn-str?

Below are the errors:
$ deepstream-mrcnn-app -i sample_720p.h264 -p libnvds_amqp_proto.so -c cfg_amqp.txt --conn-str=localhost;5672;gust;guest -topic=topicname
Now playing: sample_720p.h264

Using winsys: x11
Running…
ERROR from element nvmsg-broker: Could not initialize supporting library.
Error details: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvmsgbroker/gstnvmsgbroker.c(359): legacy_gst_nvmsgbroker_start (): /GstPipeline:dsmrcnn-pipeline/GstNvMsgBroker:nvmsg-broker:
unable to open shared library
Returned, stopping playback
Deleting pipeline
bash: 5672: command not found
bash: gust: command not found
bash: guest: command not found

That is correct if you have not created a user. Quote the string, "localhost;5672;guest;guest", and try again.
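To make the quoting point concrete: without quotes the shell treats each ';' as a command separator, which is exactly why "bash: 5672: command not found" appeared above; with quotes the whole string reaches the program as one argument. A minimal sketch (count_args is just an illustrative helper, not part of DeepStream):

```shell
# A tiny helper that reports how many arguments it received.
count_args() { echo "$#"; }

# Quoted: the connection string stays a single argument.
count_args --conn-str="localhost;5672;guest;guest"   # prints 1
```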

There is still an error with --conn-str="localhost;5672;guest;guest":
unable to open shared library.

Should the host part (currently localhost) be an IP address like 192.168.1.2?
Could you give me more suggestions?

below is the error:
$ deepstream-mrcnn-app -i sample_720p.h264 -p libnvds_amqp_proto.so -c cfg_amqp.txt --conn-str="localhost;5672;guest;guest" -t "topicname"
Now playing: sample_720p.h264

Using winsys: x11
Running…
ERROR from element nvmsg-broker: Could not initialize supporting library.
Error details: gstnvmsgbroker.c(359): legacy_gst_nvmsgbroker_start (): /GstPipeline:dsmrcnn-pipeline/GstNvMsgBroker:nvmsg-broker:
unable to open shared library
Returned, stopping playback
Deleting pipeline

Make sure the library can be found at the path you specified:
-p libnvds_amqp_proto.so
Also make sure the AMQP server service is running. Check this:
sources/libs/amqp_protocol_adaptor/README
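A quick sanity check might look like the sketch below (check_lib is an illustrative helper; the library path is the assumed default from a DeepStream 5.0 install, and the path you pass to -p must resolve from the directory you run the app in):

```shell
# Report whether a shared library exists at the given path.
check_lib() {
  if [ -f "$1" ]; then echo "found"; else echo "missing"; fi
}

# The path passed to -p must resolve from your current directory:
check_lib /opt/nvidia/deepstream/deepstream-5.0/lib/libnvds_amqp_proto.so
# Also confirm the broker is up, e.g.: systemctl status rabbitmq-server
```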

According to the README, I have downloaded rabbitmq-c.
But when running cmake, there is an error:
could NOT find OpenSSL.

Could you help me again?

Below is the error:
djm@Hartai:~/rabbitmq-c/build$ cmake ..
-- The C compiler identification is GNU 7.5.0
-- Check for working C compiler: /usr/bin/cc
-- Check for working C compiler: /usr/bin/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
-- CMAKE_BUILD_TYPE not specified. Creating Release build
-- Found C inline keyword: inline
-- Looking for getaddrinfo
-- Looking for getaddrinfo - found
-- Looking for socket
-- Looking for socket - found
-- Looking for htonll
-- Looking for htonll - not found
-- Looking for poll
-- Looking for poll - found
-- Looking for clock_gettime in rt
-- Looking for clock_gettime in rt - found
-- Looking for posix_spawnp in rt
-- Looking for posix_spawnp in rt - found
-- Performing Test HAVE_GNU90
-- Performing Test HAVE_GNU90 - Success
-- Found POPT: /usr/include (found version "1.16")
-- Could NOT find XMLTO (missing: XMLTO_EXECUTABLE)
-- Could NOT find Doxygen (missing: DOXYGEN_EXECUTABLE)
-- Looking for pthread.h
-- Looking for pthread.h - found
-- Looking for pthread_create
-- Looking for pthread_create - not found
-- Looking for pthread_create in pthreads
-- Looking for pthread_create in pthreads - not found
-- Looking for pthread_create in pthread
-- Looking for pthread_create in pthread - found
-- Found Threads: TRUE
CMake Error at /usr/share/cmake-3.10/Modules/FindPackageHandleStandardArgs.cmake:137 (message):
Could NOT find OpenSSL, try to set the path to OpenSSL root folder in the
system variable OPENSSL_ROOT_DIR (missing: OPENSSL_CRYPTO_LIBRARY
OPENSSL_INCLUDE_DIR) (Required is at least version "0.9.8")
Call Stack (most recent call first):
/usr/share/cmake-3.10/Modules/FindPackageHandleStandardArgs.cmake:378 (_FPHSA_FAILURE_MESSAGE)
/usr/share/cmake-3.10/Modules/FindOpenSSL.cmake:390 (find_package_handle_standard_args)
CMakeLists.txt:273 (find_package)

-- Configuring incomplete, errors occurred!
See also "/home/djm/rabbitmq-c/build/CMakeFiles/CMakeOutput.log".
See also "/home/djm/rabbitmq-c/build/CMakeFiles/CMakeError.log".

Try this: sudo apt-get install libssl-dev

Thanks for your help!

Following your suggestion, it now works.
After cmake, I can get librabbitmq.so.4.2.0.

I will follow the further steps.

I get a "mask_rcnn_resnet50.etlt_b1_gpu0_int8.engine" open error
when running deepstream-mrcnn-app.

I am using a TX2 with JetPack 4.4.

Could you help me again?

Below is the error:
djm@Hartai:/opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/deepstream-mrcnn-test$ deepstream-mrcnn-app -i sample-720p.h264 -p libnvds_amqp_proto.so -c cfg_amqp.txt --conn-str="localhost;5672;guest;guest" --topic="topicname"
Now playing: sample-720p.h264

Using winsys: x11
Opening in BLOCKING MODE
ERROR: Deserialize engine failed because file path: /opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/deepstream-mrcnn-test/../../../../samples/models/tlt_pretrained_models/mrcnn/mask_rcnn_resnet50.etlt_b1_gpu0_int8.engine open error
0:00:01.244412876 10486 0x2274ad30 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1690> [UID = 1]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/deepstream-mrcnn-test/../../../../samples/models/tlt_pretrained_models/mrcnn/mask_rcnn_resnet50.etlt_b1_gpu0_int8.engine failed
0:00:01.244489804 10486 0x2274ad30 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1797> [UID = 1]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/deepstream-mrcnn-test/../../../../samples/models/tlt_pretrained_models/mrcnn/mask_rcnn_resnet50.etlt_b1_gpu0_int8.engine failed, try rebuild
0:00:01.244514828 10486 0x2274ad30 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1715> [UID = 1]: Trying to create engine from model files
parseModel: Failed to open TLT encoded model file /opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/deepstream-mrcnn-test/../../../../samples/models/tlt_pretrained_models/mrcnn/mask_rcnn_resnet50.etlt
ERROR: failed to build network since parsing model errors.
ERROR: Failed to create network using custom network creation function
ERROR: Failed to get cuda engine from custom library API
0:00:01.244985069 10486 0x2274ad30 ERROR nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1735> [UID = 1]: build engine file failed
Segmentation fault (core dumped)

You can get the models from here,

wget https://nvidia.box.com/shared/static/8k0zpe9gq837wsr0acoy4oh3fdf476gq.zip -O models.zip

which includes the mrcnn model.

Thanks for your reply.

I will download it and try mrcnn again.

I have downloaded the mrcnn model to the path:
/opt/nvidia/deepstream/deepstream-5.0/samples/models/tlt_pretrained_models/mrcnn.

There are two files: mask_rcnn_resnet50.etlt and cal.bin.
There is no mask_rcnn_resnet50.etlt_b1_gpu0_int8.engine.

After downloading, I ran deepstream-mrcnn-app again and still get the same error as before,
that is, "mask_rcnn_resnet50.etlt_b1_gpu0_int8.engine open error".

How can I get the engine file?
Could you help me again?

Below is the error:

Using winsys: x11
Opening in BLOCKING MODE
ERROR: Deserialize engine failed because file path: /opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/deepstream-mrcnn-test/../../../../samples/models/tlt_pretrained_models/mrcnn/mask_rcnn_resnet50.etlt_b1_gpu0_int8.engine open error
0:00:03.260159449 12232 0x2fc8db20 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1690> [UID = 1]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/deepstream-mrcnn-test/../../../../samples/models/tlt_pretrained_models/mrcnn/mask_rcnn_resnet50.etlt_b1_gpu0_int8.engine failed
0:00:03.260260248 12232 0x2fc8db20 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1797> [UID = 1]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/deepstream-mrcnn-test/../../../../samples/models/tlt_pretrained_models/mrcnn/mask_rcnn_resnet50.etlt_b1_gpu0_int8.engine failed, try rebuild
0:00:03.260286936 12232 0x2fc8db20 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1715> [UID = 1]: Trying to create engine from model files
ERROR: [TRT]: UffParser: Validator error: multilevel_propose_rois: Unsupported operation _MultilevelProposeROI_TRT
parseModel: Failed to parse UFF model
ERROR: failed to build network since parsing model errors.
ERROR: Failed to create network using custom network creation function
ERROR: Failed to get cuda engine from custom library API
0:00:04.334584996 12232 0x2fc8db20 ERROR nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1735> [UID = 1]: build engine file failed

Check this:

I followed these instructions and am still receiving the following error:

ERROR: [TRT]: UffParser: Validator error: multilevel_propose_rois: Unsupported operation _MultilevelProposeROI_TRT
parseModel: Failed to parse UFF model
ERROR: failed to build network since parsing model errors.
ERROR: Failed to create network using custom network creation function
ERROR: Failed to get cuda engine from custom library API

I am on jetson nano using L4T 32.4.3 and DS5.0.

Previously I had /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7.1.3 installed and that didn’t work. So per the above instructions, I ran the following commands:

wget https://nvidia.box.com/shared/static/ezrjriq08q8fy8tvqcswgi0u6yn0bomg.1 -O libnvinfer_plugin.so.7.0.0.1
sudo mv /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7.1.3 ${HOME}/libnvinfer_plugin.so.7.1.3.bak
sudo cp libnvinfer_plugin.so.7.0.0.1 /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7.1.3

Does this mean that TensorRT 7.1 (DeepStream 5.0) is not suitable for running the deepstream-mrcnn-test?

If I downgrade to TRT 7.0, what is the effect on the other DeepStream 5.0 samples?

I am running the "deepstream-mrcnn-test" example.
I think "dsmrcnn_pgie_config.txt" is the config file used in that example.

In "dsmrcnn_pgie_config.txt",
the "model-engine-file" should be "mask_rcnn_resnet50.etlt_b1_gpu0_int8.engine".

But the error is that "mask_rcnn_resnet50.etlt_b1_gpu0_int8.engine" is not found.

How can I get or download "mask_rcnn_resnet50.etlt_b1_gpu0_int8.engine"?
Should I change libnvinfer_plugin.so from 7.1.3 to 7.0.0.1?

Below is part of the "dsmrcnn_pgie_config.txt":
[property]
net-scale-factor=0.017507
offsets=123.675;116.280;103.53
model-color-format=0
labelfile-path=../../../../samples/configs/tlt_pretrained_models/mrcnn_labels.txt
tlt-encoded-model=../../../../samples/models/tlt_pretrained_models/mrcnn/mask_rcnn_resnet50.etlt
tlt-model-key=nvidia_tlt
model-engine-file=../../../../samples/models/tlt_pretrained_models/mrcnn/mask_rcnn_resnet50.etlt_b1_gpu0_int8.engine
int8-calib-file=../../models/tlt_pretrained_models/mrcnn/cal.bin

"I am on jetson nano using L4T 32.4.3 and DS5.0.

Previously I had /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7.1.3 installed and that didn't work. So per the above instructions, I ran the following commands:"

→ Clear the GStreamer cache and try again:
rm -rf ~/.cache/gstreamer-1.0/

It's not an error. The engine is built on the first run; once built, it is saved and reused on later runs.
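As a sketch of that flow (property names and paths taken from the dsmrcnn_pgie_config.txt fragment earlier in the thread): nvinfer reads the encoded model and calibration file, builds the INT8 engine on the first run, and serializes it to the model-engine-file path for reuse.

```
[property]
# Inputs read on the first run:
tlt-encoded-model=../../../../samples/models/tlt_pretrained_models/mrcnn/mask_rcnn_resnet50.etlt
tlt-model-key=nvidia_tlt
int8-calib-file=../../models/tlt_pretrained_models/mrcnn/cal.bin
# Output: the engine built on the first run is serialized here and
# deserialized directly on later runs:
model-engine-file=../../../../samples/models/tlt_pretrained_models/mrcnn/mask_rcnn_resnet50.etlt_b1_gpu0_int8.engine
```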

How do I build the engine?