Deepstream_4.x_apps not running on Xavier with Jetpack 4.2.2

Hello,

After following all the setup scripts, I would like to run the sample FRCNN model that comes with the repository. No config.txt files were changed… Here is the output:

nvidia@nvidia:~/ai/deepstream_4.x_apps$ ./deepstream-custom pgie_frcnn_uff_config.txt sample_720p.mp4 
Now playing: pgie_frcnn_uff_config.txt

Using winsys: x11 
Opening in BLOCKING MODE 
Creating LL OSD context new
0:00:01.575456100  9137   0x559d69c120 INFO                 nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]:initialize(): Trying to create engine from model files
0:00:12.128553910  9137   0x559d69c120 INFO                 nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]:generateTRTModel(): Storing the serialized cuda engine to file at /home/nvidia/ai/deepstream_4.x_apps/models/frcnn/faster_rcnn.etlt_b1_fp32.engine
Running...
NvMMLiteOpen : Block : BlockType = 261 
NVMEDIA: Reading vendor.tegra.display-size : status: 6 
NvMMLiteBlockCreate : Block : BlockType = 261

It stays stuck there and does nothing, which makes it hard to debug the issue. Clearing the cache does not change anything either.

What else should I do?

Hi,
Please share which Jetson platform you are using. Also, please try running deepstream-app:

samples/configs/deepstream-app$ deepstream-app -c source8_1080p_dec_infer-resnet_tracker_tiled_display_fp16_nano.txt

We would like to know whether the default app runs well or not.

Hello @Dane,

The Jetson platform I am using is the Xavier. Also, the sample works perfectly fine.

Still no luck with the initial problem…

Hi,
In source8_1080p_dec_infer-resnet_tracker_tiled_display_fp16_nano.txt, [primary-gie] is configured as follows:

[primary-gie]
enable=1
gpu-id=0
model-engine-file=../../models/Primary_Detector_Nano/resnet10.caffemodel_b8_fp16.engine
batch-size=8
#Required by the app for OSD, not a plugin property
bbox-border-color0=1;0;0;1
bbox-border-color1=0;1;1;1
bbox-border-color2=0;0;1;1
bbox-border-color3=0;1;0;1
interval=4
gie-unique-id=1
nvbuf-memory-type=0
config-file=config_infer_primary_nano.txt

We would suggest replacing config_infer_primary_nano.txt with pgie_frcnn_uff_config.txt and giving it a try. Please ensure pgie_frcnn_uff_config.txt is in the same format as config_infer_primary_nano.txt.
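For example, the modified [primary-gie] section could look like this (a sketch of the change only; commenting out the resnet10 model-engine-file line and lowering batch-size to match the FRCNN config are assumptions on our side):

[primary-gie]
enable=1
gpu-id=0
# Commented out so the app does not load the resnet10 engine (assumption)
#model-engine-file=../../models/Primary_Detector_Nano/resnet10.caffemodel_b8_fp16.engine
# Match the batch-size=1 used in pgie_frcnn_uff_config.txt (assumption)
batch-size=1
#Required by the app for OSD, not a plugin property
bbox-border-color0=1;0;0;1
bbox-border-color1=0;1;1;1
bbox-border-color2=0;0;1;1
bbox-border-color3=0;1;0;1
interval=4
gie-unique-id=1
nvbuf-memory-type=0
config-file=pgie_frcnn_uff_config.txt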

Hi mbufi,
Please paste your pgie_frcnn_uff_config.txt here.

Moving the topic from the DS forum to the TLT forum.

Hi mbufi,
Can you follow https://github.com/NVIDIA-AI-IOT/deepstream_4.x_apps/ step by step, using its default attached models, to verify?

A sample .etlt model is available at models/frcnn/faster_rcnn.etlt

Also, I see that you are running with an mp4 file. Can you try running with an h264 file instead?

./deepstream-custom <config_file> <H264_file>
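For example, with the stream that ships with DeepStream (the path below assumes the default DeepStream 4.0 install location):

./deepstream-custom pgie_frcnn_uff_config.txt /opt/nvidia/deepstream/deepstream-4.0/samples/streams/sample_720p.h264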

Hello again @Morganh!

  1. Here is the config file I am using… it is unchanged. I am just trying to run the example from the GitHub repository before bringing in my own model from TLT:
# Copyright (c) 2018 NVIDIA Corporation.  All rights reserved.
#
# NVIDIA Corporation and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto.  Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA Corporation is strictly prohibited.

# Following properties are mandatory when engine files are not specified:
#   int8-calib-file(Only in INT8)
#   Caffemodel mandatory properties: model-file, proto-file, output-blob-names
#   UFF: uff-file, input-dims, uff-input-blob-name, output-blob-names
#   ONNX: onnx-file
#
# Mandatory properties for detectors:
#   num-detected-classes
#
# Optional properties for detectors:
#   enable-dbscan(Default=false), interval(Primary mode only, Default=0)
#   custom-lib-path,
#   parse-bbox-func-name
#
# Mandatory properties for classifiers:
#   classifier-threshold, is-classifier
#
# Optional properties for classifiers:
#   classifier-async-mode(Secondary mode only, Default=false)
#
# Optional properties in secondary mode:
#   operate-on-gie-id(Default=0), operate-on-class-ids(Defaults to all classes),
#   input-object-min-width, input-object-min-height, input-object-max-width,
#   input-object-max-height
#
# Following properties are always recommended:
#   batch-size(Default=1)
#
# Other optional properties:
#   net-scale-factor(Default=1), network-mode(Default=0 i.e FP32),
#   model-color-format(Default=0 i.e. RGB) model-engine-file, labelfile-path,
#   mean-file, gie-unique-id(Default=0), offsets, gie-mode (Default=1 i.e. primary),
#   custom-lib-path, network-mode(Default=0 i.e FP32)
#
# The values in the config file are overridden by values set through GObject
# properties.

[property]
gpu-id=0
net-scale-factor=1.0
offsets=103.939;116.779;123.68
model-color-format=1
labelfile-path=./nvdsinfer_customparser_frcnn_uff/frcnn_labels.txt
#uff-file=./faster_rcnn.uff
#model-engine-file=./faster_rcnn.uff_b1_fp32.engine
tlt-encoded-model=./models/frcnn/faster_rcnn.etlt
tlt-model-key=nvidia_tlt
uff-input-dims=3;272;480;0
uff-input-blob-name=input_1
batch-size=1
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=0
num-detected-classes=5
interval=0
gie-unique-id=1
is-classifier=0
#network-type=0
output-blob-names=dense_regress/BiasAdd;dense_class/Softmax;proposal
parse-bbox-func-name=NvDsInferParseCustomFrcnnUff
custom-lib-path=./nvdsinfer_customparser_frcnn_uff/libnvds_infercustomparser_frcnn_uff.so

[class-attrs-all]
roi-top-offset=0
roi-bottom-offset=0
detected-min-w=0
detected-min-h=0
detected-max-w=0
detected-max-h=0

## Per class configuration
#[class-attrs-2]
#threshold=0.6
#roi-top-offset=20
#roi-bottom-offset=10
#detected-min-w=40
#detected-min-h=40
#detected-max-w=400
#detected-max-h=800
  2. I am using the default models, as you can see in the config file.
    My one concern is the following step on the GitHub page, where I need to copy over the newly built plugin:
sudo cp `pwd`/out/libnvinfer_plugin.so.5.x.x /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.5.x.x

Am I only copying one file over, or all of them? Here is what I did with my newly built libnvinfer plugin:

sudo cp `pwd`/out/libnvinfer_plugin.so.5.1.5 /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.5.1.6

So I assume I am copying the contents of the newly built 5.1.5 library over the 5.1.6 plugin, since the aarch64-linux-gnu folder contains special linking files for it? If my command is incorrect, could you please tell me what the correct command(s) should be? (My full understanding is sketched after this list.)

  3. It does not work with an h264 file:
nvidia@nvidia:~/ai/deepstream_4.x_apps$ ./deepstream-custom pgie_frcnn_uff_config.txt sample_1080p_h264.mp4 
Now playing: pgie_frcnn_uff_config.txt

Using winsys: x11 
Opening in BLOCKING MODE 
Creating LL OSD context new
0:00:00.674744631 10185   0x5564122120 INFO                 nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]:initialize(): Trying to create engine from model files
0:00:09.546123900 10185   0x5564122120 INFO                 nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]:generateTRTModel(): Storing the serialized cuda engine to file at /home/nvidia/ai/deepstream_4.x_apps/models/frcnn/faster_rcnn.etlt_b1_fp32.engine
Running...
ERROR from element h264-parser: Internal data stream error.
Error details: gstbaseparse.c(3611): gst_base_parse_loop (): /GstPipeline:ds-custom-pipeline/GstH264Parse:h264-parser:
streaming stopped, reason not-negotiated (-4)
Returned, stopping playback

Using an h264 file gives me a different problem than using the MP4.
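As mentioned in point 2 above, here is my understanding of the plugin replacement (a sketch; the symlink targets are my assumption and should be verified with ls -l first):

# Check how the plugin library and its symlinks are laid out
ls -l /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so*
# Expected (assumption): libnvinfer_plugin.so -> libnvinfer_plugin.so.5
#                        libnvinfer_plugin.so.5 -> libnvinfer_plugin.so.5.1.6
# Back up the stock library before overwriting it
sudo cp /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.5.1.6 ~/libnvinfer_plugin.so.5.1.6.bak
# Overwrite the file the symlink chain resolves to, then refresh the linker cache
sudo cp `pwd`/out/libnvinfer_plugin.so.5.1.5 /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.5.1.6
sudo ldconfig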

Thank you in advance for all your help!

Are you using an h264 file? From your command, it is sample_1080p_h264.mp4 instead.
I recall that DS contains sample_720p.h264 by default.
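The not-negotiated error is consistent with feeding an MP4 container straight into h264parse, which expects an elementary H264 stream. Outside the app, you can check a container file by putting a demuxer in front, e.g. (a rough gst-launch sketch; qtdemux is an assumption for an MP4 container):

gst-launch-1.0 filesrc location=sample_1080p_h264.mp4 ! qtdemux ! h264parse ! nvv4l2decoder ! fakesink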

@Morganh, that solves the problem! Using that .h264 file allows me to play the deepstream pipeline…

But it's extremely choppy and skips frames. Do you know why that may be? This is running on a Xavier with jetson_clocks enabled and MAXN power mode.

Good to know h264 file works.

For the choppy playback and skipped frames, can you try your trained model? I'm afraid what you are seeing results from the demo model in the GitHub repository.