Hello again @Morganh!
- Here is the config file I am using… it is unchanged. I am just trying to run the example from the GitHub repo before bringing in my own model from TLT.
# Copyright (c) 2018 NVIDIA Corporation. All rights reserved.
#
# NVIDIA Corporation and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto. Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA Corporation is strictly prohibited.
# Following properties are mandatory when engine files are not specified:
# int8-calib-file(Only in INT8)
# Caffemodel mandatory properties: model-file, proto-file, output-blob-names
# UFF: uff-file, input-dims, uff-input-blob-name, output-blob-names
# ONNX: onnx-file
#
# Mandatory properties for detectors:
# num-detected-classes
#
# Optional properties for detectors:
# enable-dbscan(Default=false), interval(Primary mode only, Default=0)
# custom-lib-path,
# parse-bbox-func-name
#
# Mandatory properties for classifiers:
# classifier-threshold, is-classifier
#
# Optional properties for classifiers:
# classifier-async-mode(Secondary mode only, Default=false)
#
# Optional properties in secondary mode:
# operate-on-gie-id(Default=0), operate-on-class-ids(Defaults to all classes),
# input-object-min-width, input-object-min-height, input-object-max-width,
# input-object-max-height
#
# Following properties are always recommended:
# batch-size(Default=1)
#
# Other optional properties:
# net-scale-factor(Default=1), network-mode(Default=0 i.e FP32),
# model-color-format(Default=0 i.e. RGB) model-engine-file, labelfile-path,
# mean-file, gie-unique-id(Default=0), offsets, gie-mode (Default=1 i.e. primary),
# custom-lib-path, network-mode(Default=0 i.e FP32)
#
# The values in the config file are overridden by values set through GObject
# properties.
[property]
gpu-id=0
net-scale-factor=1.0
offsets=103.939;116.779;123.68
model-color-format=1
labelfile-path=./nvdsinfer_customparser_frcnn_uff/frcnn_labels.txt
#uff-file=./faster_rcnn.uff
#model-engine-file=./faster_rcnn.uff_b1_fp32.engine
tlt-encoded-model=./models/frcnn/faster_rcnn.etlt
tlt-model-key=nvidia_tlt
uff-input-dims=3;272;480;0
uff-input-blob-name=input_1
batch-size=1
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=0
num-detected-classes=5
interval=0
gie-unique-id=1
is-classifier=0
#network-type=0
output-blob-names=dense_regress/BiasAdd;dense_class/Softmax;proposal
parse-bbox-func-name=NvDsInferParseCustomFrcnnUff
custom-lib-path=./nvdsinfer_customparser_frcnn_uff/libnvds_infercustomparser_frcnn_uff.so
[class-attrs-all]
roi-top-offset=0
roi-bottom-offset=0
detected-min-w=0
detected-min-h=0
detected-max-w=0
detected-max-h=0
## Per class configuration
#[class-attrs-2]
#threshold=0.6
#roi-top-offset=20
#roi-bottom-offset=10
#detected-min-w=40
#detected-min-h=40
#detected-max-w=400
#detected-max-h=800
- I am using the default models as you can see in the config file.
My one concern is the following step from the GitHub instructions, where I need to copy over the newly built plugin:
sudo cp `pwd`/out/libnvinfer_plugin.so.5.x.x /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.5.x.x
Am I copying over only one file, or all of them? Here is what I did with my newly built libnvinfer plugin:
sudo cp `pwd`/out/libnvinfer_plugin.so.5.1.5 /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.5.1.6
So my assumption was that I am copying the contents of the newly built 5.1.5 library over the existing 5.1.6 one, since the symlinks in the aarch64-linux-gnu folder point to that version. If my command is incorrect, could you please tell me what the correct command(s) should be?
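For reference, this is my best guess at what the full install should look like. It is only a sketch of my understanding, assuming the rebuilt library keeps its own 5.1.5 version and the loader resolves it through the usual libnvinfer_plugin.so.5 symlink; please correct me if it is wrong:

# copy the rebuilt plugin under its own version number
sudo cp `pwd`/out/libnvinfer_plugin.so.5.1.5 /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.5.1.5
# point the SONAME symlink at the rebuilt library
sudo ln -sf libnvinfer_plugin.so.5.1.5 /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.5
# refresh the dynamic linker cache
sudo ldconfig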
- It does not work with the sample MP4 file:
nvidia@nvidia:~/ai/deepstream_4.x_apps$ ./deepstream-custom pgie_frcnn_uff_config.txt sample_1080p_h264.mp4
Now playing: pgie_frcnn_uff_config.txt
Using winsys: x11
Opening in BLOCKING MODE
Creating LL OSD context new
0:00:00.674744631 10185 0x5564122120 INFO nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]:initialize(): Trying to create engine from model files
0:00:09.546123900 10185 0x5564122120 INFO nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]:generateTRTModel(): Storing the serialized cuda engine to file at /home/nvidia/ai/deepstream_4.x_apps/models/frcnn/faster_rcnn.etlt_b1_fp32.engine
Running...
ERROR from element h264-parser: Internal data stream error.
Error details: gstbaseparse.c(3611): gst_base_parse_loop (): /GstPipeline:ds-custom-pipeline/GstH264Parse:h264-parser:
streaming stopped, reason not-negotiated (-4)
Returned, stopping playback
Using a raw .h264 file gives me a different problem than using the MP4.
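Judging from the log above, the pipeline feeds the file straight into h264parse (GstH264Parse:h264-parser), so I assume it wants an elementary H.264 stream rather than an MP4 container. In case my .h264 file itself was the issue, this is a minimal sketch of how I understand one could be extracted from the sample MP4 with ffmpeg (the output filename is just my choice):

# strip the MP4 container and keep only the raw Annex-B H.264 video stream
ffmpeg -i sample_1080p_h264.mp4 -c:v copy -bsf:v h264_mp4toannexb -an sample_1080p.h264
# then run the sample against the elementary stream
./deepstream-custom pgie_frcnn_uff_config.txt sample_1080p.h264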
Thank you in advance for all your help!