DeepStream 6.2 engine file problem, running model on DeepStream

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type( questions, new requirements, bugs)
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

glueck@glueck-WHITLEY:~$ nvidia-smi
Thu Jan 4 16:02:51 2024
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 525.147.05 Driver Version: 525.147.05 CUDA Version: 12.0 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 Tesla T4 Off | 00000000:98:00.0 Off | 0 |
| N/A 36C P8 9W / 70W | 11MiB / 15360MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| 0 N/A N/A 1547 G /usr/lib/xorg/Xorg 4MiB |
| 0 N/A N/A 2306 G /usr/lib/xorg/Xorg 4MiB |
+-----------------------------------------------------------------------------+
glueck@glueck-WHITLEY:~$ nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2019 NVIDIA Corporation
Built on Sun_Jul_28_19:07:16_PDT_2019
Cuda compilation tools, release 10.1, V10.1.243

penang_port_config_source.txt (5.7 KB)

This is the sample config we used:

Sample stream:
/opt/nvidia/deepstream/deepstream-6.2/samples/streams/TopLow.mp4
Model:
/opt/nvidia/deepstream/deepstream-6.2/samples/models/Container_Detection/Primary_Detector/resnet18_detector.trt.int8

How do we run DeepStream on our sample video?

  1. Copy penang_port_config_source.txt to /opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app.
  2. Enter this directory, then correct some paths in penang_port_config_source.txt if needed.
  3. Execute deepstream-app -c penang_port_config_source.txt

glueck@glueck-WHITLEY:/opt/nvidia/deepstream/deepstream-6.2/samples/configs/deepstream-app$ sudo deepstream-app -c penang_port_config_source.txt
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
gstnvtracker: Batch processing is ON
gstnvtracker: Past frame output is ON
[NvMultiObjectTracker] Initialized
WARNING: [TRT]: CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage. See CUDA_MODULE_LOADING in CUDA C++ Programming Guide
ERROR: [TRT]: 1: [stdArchiveReader.cpp::StdArchiveReader::42] Error Code 1: Serialization (Serialization assertion stdVersionRead == serializationVersion failed.Version tag does not match. Note: Current Version: 232, Serialized Engine Version: 205)
ERROR: [TRT]: 4: [runtime.cpp::deserializeCudaEngine::66] Error Code 4: Internal Error (Engine deserialization failed.)
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:1533 Deserialize engine failed from file: /opt/nvidia/deepstream/deepstream-6.2/samples/configs/deepstream-app/../../models/Container_Detection/Primary_Detector/resnet18_detector.trt.int8
0:00:03.366906676 6549 0x5558a7a9d350 WARN nvinfer gstnvinfer.cpp:677:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1897> [UID = 1]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-6.2/samples/configs/deepstream-app/../../models/Container_Detection/Primary_Detector/resnet18_detector.trt.int8 failed
0:00:03.440623495 6549 0x5558a7a9d350 WARN nvinfer gstnvinfer.cpp:677:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2002> [UID = 1]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-6.2/samples/configs/deepstream-app/../../models/Container_Detection/Primary_Detector/resnet18_detector.trt.int8 failed, try rebuild
0:00:03.440649086 6549 0x5558a7a9d350 INFO nvinfer gstnvinfer.cpp:680:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1923> [UID = 1]: Trying to create engine from model files
WARNING: [TRT]: CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage. See CUDA_MODULE_LOADING in CUDA C++ Programming Guide
WARNING: [TRT]: The implicit batch dimension mode has been deprecated. Please create the network with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag whenever possible.
Weights for layer conv1 doesn't exist
ERROR: [TRT]: CaffeParser: ERROR: Attempting to access NULL weights
Weights for layer conv1 doesn't exist
ERROR: [TRT]: CaffeParser: ERROR: Attempting to access NULL weights
ERROR: [TRT]: 3: conv1:kernel weights has count 0 but 4704 was expected
ERROR: [TRT]: 4: conv1: count of 0 weights in kernel, but kernel dimensions (7,7) with 3 input channels, 32 output channels and 1 groups were specified. Expected Weights count is 3 * 7*7 * 32 / 1 = 4704
ERROR: [TRT]: 4: [convolutionNode.cpp::computeOutputExtents::58] Error Code 4: Internal Error (conv1: number of kernel weights does not match tensor dimensions)
deepstream-app: /_src/parsers/parserHelper.h:76: nvinfer1::Dims3 parserhelper::getCHW(const Dims&): Assertion `d.nbDims >= 3' failed.
Aborted

labels.txt (10 Bytes)
Calibration bin:
TRT-8205-EntropyCalibration2
input_1: 3c010a14
conv1/convolution: 3d6bd317
conv1/BiasAdd: 3d55f3fd
bn_conv1/batchnorm/mul_1: 3de26890
bn_conv1/batchnorm/add_1: 3d8a7aff
activation_1/Relu: 3d8b0100
block_1a_conv_1/convolution: 3ec99e14
block_1a_conv_1/BiasAdd: 3ec9ea95
block_1a_bn_1/batchnorm/mul_1: 3dc731b4
block_1a_bn_1/batchnorm/add_1: 3d94c680
block_1a_relu_1/Relu: 3d141b26
block_1a_conv_2/convolution: 3dc2525f
block_1a_conv_2/BiasAdd: 3dd7b8cf
block_1a_bn_2/batchnorm/mul_1: 3d4a6896
block_1a_bn_2/batchnorm/add_1: 3d55f12d
block_1a_conv_shortcut/convolution: 3d94cf8c
block_1a_conv_shortcut/BiasAdd: 3d913998
block_1a_bn_shortcut/batchnorm/mul_1: 3d46be41
block_1a_bn_shortcut/batchnorm/add_1: 3d49a3e4
add_1/add: 3dd6d4ae
block_1a_relu/Relu: 3d207c71
block_1b_conv_1/convolution: 3e06e813
block_1b_conv_1/BiasAdd: 3e06d1e3
block_1b_bn_1/batchnorm/mul_1: 3d80d79c
block_1b_bn_1/batchnorm/add_1: 3d80a95b
block_1b_relu_1/Relu: 3d5d1bcf
block_1b_conv_2/convolution: 3dd80d27
block_1b_conv_2/BiasAdd: 3dbba9e5
block_1b_bn_2/batchnorm/mul_1: 3db5ada3
block_1b_bn_2/batchnorm/add_1: 3d8f0737
add_2/add: 3daa13c6
block_1b_relu/Relu: 3db7db60
block_2a_conv_1/convolution: 3e8206af
block_2a_conv_1/BiasAdd: 3e820a36
block_2a_bn_1/batchnorm/mul_1: 3d8f551d
block_2a_bn_1/batchnorm/add_1: 3d2ef23b
block_2a_relu_1/Relu: 3d51ffdd
block_2a_conv_2/convolution: 3dfb1308
block_2a_conv_2/BiasAdd: 3dfb0cd0
block_2a_bn_2/batchnorm/mul_1: 3d85fc37
block_2a_bn_2/batchnorm/add_1: 3d74dcd4
block_2a_conv_shortcut/convolution: 3da44cfc
block_2a_conv_shortcut/BiasAdd: 3da33f73
block_2a_bn_shortcut/batchnorm/mul_1: 3d33f03c
block_2a_bn_shortcut/batchnorm/add_1: 3cebfa1f
add_3/add: 3d706a20
block_2a_relu/Relu: 3d937f16
block_2b_conv_1/convolution: 3e26f1c2
block_2b_conv_1/BiasAdd: 3e1b5620
block_2b_bn_1/batchnorm/mul_1: 3d5d5e8f
block_2b_bn_1/batchnorm/add_1: 3d4331f4
block_2b_relu_1/Relu: 3d2b79a8
block_2b_conv_2/convolution: 3dc49122
block_2b_conv_2/BiasAdd: 3dceb7ee
block_2b_bn_2/batchnorm/mul_1: 3daf82d9
block_2b_bn_2/batchnorm/add_1: 3d9814fa
add_4/add: 3dc4f208
block_2b_relu/Relu: 3e003f8a
block_3a_conv_1/convolution: 3e876b13
block_3a_conv_1/BiasAdd: 3e876c00
block_3a_bn_1/batchnorm/mul_1: 3d90bb82
block_3a_bn_1/batchnorm/add_1: 3d6c851a
block_3a_relu_1/Relu: 3d95037d
block_3a_conv_2/convolution: 3e0dbbaf
block_3a_conv_2/BiasAdd: 3e0dbc23
block_3a_bn_2/batchnorm/mul_1: 3d97c8ca
block_3a_bn_2/batchnorm/add_1: 3d2513cb
block_3a_conv_shortcut/convolution: 3d9060d7
block_3a_conv_shortcut/BiasAdd: 3d905f6a
block_3a_bn_shortcut/batchnorm/mul_1: 3d8e26aa
block_3a_bn_shortcut/batchnorm/add_1: 3ce3626c
add_5/add: 3d4b9799
block_3a_relu/Relu: 3deb833c
block_3b_conv_1/convolution: 3e32cd30
block_3b_conv_1/BiasAdd: 3e32cc4a
block_3b_bn_1/batchnorm/mul_1: 3d9984c2
block_3b_bn_1/batchnorm/add_1: 3d96e003
block_3b_relu_1/Relu: 3da2220c
block_3b_conv_2/convolution: 3de8c09d
block_3b_conv_2/BiasAdd: 3de8c280
block_3b_bn_2/batchnorm/mul_1: 3db43ce6
block_3b_bn_2/batchnorm/add_1: 3d88fb4d
add_6/add: 3de0d8ed
block_3b_relu/Relu: 3e317524
block_4a_conv_1/convolution: 3eb88f8c
block_4a_conv_1/BiasAdd: 3eb89070
block_4a_bn_1/batchnorm/mul_1: 3dd599b5
block_4a_bn_1/batchnorm/add_1: 3dd75a61
block_4a_relu_1/Relu: 3dcf7692
block_4a_conv_2/convolution: 3e6ae5d7
block_4a_conv_2/BiasAdd: 3e6a7501
block_4a_bn_2/batchnorm/mul_1: 3d87e85e
block_4a_bn_2/batchnorm/add_1: 3d587648
block_4a_conv_shortcut/convolution: 3e3c9b2c
block_4a_conv_shortcut/BiasAdd: 3e3c9b81
block_4a_bn_shortcut/batchnorm/mul_1: 3d7a5f9d
block_4a_bn_shortcut/batchnorm/add_1: 3d5df7b7
add_7/add: 3d3f122c
block_4a_relu/Relu: 3e00742f
block_4b_conv_1/convolution: 3e9f2155
block_4b_conv_1/BiasAdd: 3e9f1db3
block_4b_bn_1/batchnorm/mul_1: 3d97cde4
block_4b_bn_1/batchnorm/add_1: 3dab427b
block_4b_relu_1/Relu: 3d98ec81
block_4b_conv_2/convolution: 3f1b1584
block_4b_conv_2/BiasAdd: 3f1b14a3
block_4b_bn_2/batchnorm/mul_1: 3da4803d
block_4b_bn_2/batchnorm/add_1: 3d757eaf
add_8/add: 3e06994f
block_4b_relu/Relu: 3e001b8a
output_bbox/convolution: 3dafabc7
output_bbox/BiasAdd: 3db30f23
output_cov/convolution: 3e502c98
output_cov/BiasAdd: 3e5c1d4e
output_cov/Sigmoid: 3becf50b
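For reference, each entry in the calibration cache above is believed to map a tensor name to the big-endian IEEE-754 float32 bit pattern of its calibration scale, and the "TRT-8205" header ties the cache to the TensorRT version that generated it. A small sketch to decode an entry (the `decode_scale` helper is mine, using python3 for the bit math):

```shell
# Decode a calibration-cache hex value into its float32 scale.
decode_scale() { python3 -c "import struct,sys;print(struct.unpack('!f',bytes.fromhex(sys.argv[1]))[0])" "$1"; }

decode_scale 3c010a14   # input_1 entry from the cache above (~0.0079)
decode_scale 3f800000   # sanity check: decodes to 1.0
```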

I need to create the prototxt:
layer {
  name: "input_1"
  type: "Input"
  top: "input_1"
  input_param {
    shape {
      dim: 1
      dim: 3
      dim: 368
      dim: 640
    }
  }
}
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "input_1"
  top: "conv1"
  convolution_param {
    num_output: 32
    pad_h: 3
    pad_w: 3
    kernel_h: 7
    kernel_w: 7
    stride_h: 2
    stride_w: 2
  }
}
layer {
  name: "bn_conv1"
  type: "Scale"
  bottom: "conv1"
  top: "bn_conv1"
  scale_param {
    axis: 1
    bias_term: true
  }
}
layer {
  name: "activation_1/Relu"
  type: "ReLU"
  bottom: "bn_conv1"
  top: "activation_1/Relu"
}
layer {
  name: "block_1a_conv_1"
  type: "Convolution"
  bottom: "activation_1/Relu"
  top: "block_1a_conv_1"
  convolution_param {
    num_output: 64
    pad_h: 1
    pad_w: 1
    kernel_h: 3
    kernel_w: 3
    stride_h: 2
    stride_w: 2
  }
}
layer {
  name: "block_1a_conv_shortcut"
  type: "Convolution"
  bottom: "activation_1/Relu"
  top: "block_1a_conv_shortcut"
  convolution_param {
    num_output: 64
    pad_h: 0
    pad_w: 0
    kernel_h: 1
    kernel_w: 1
    stride_h: 2
    stride_w: 2
  }
}
layer {
  name: "block_1a_bn_1"
  type: "Scale"
  bottom: "block_1a_conv_1"
  top: "block_1a_bn_1"
  scale_param {
    axis: 1
    bias_term: true
  }
}
layer {
  name: "block_1a_bn_shortcut"
  type: "Scale"
  bottom: "block_1a_conv_shortcut"
  top: "block_1a_bn_shortcut"
  scale_param {
    axis: 1
    bias_term: true
  }
}
layer {
  name: "activation_2/Relu"
  type: "ReLU"
  bottom: "block_1a_bn_1"
  top: "activation_2/Relu"
}
layer {
  name: "block_1a_conv_2"
  type: "Convolution"
  bottom: "activation_2/Relu"
  top: "block_1a_conv_2"
  convolution_param {
    num_output: 64
    pad_h: 1
    pad_w: 1
    kernel_h: 3
    kernel_w: 3
    stride_h: 1
    stride_w: 1
  }
}
layer {
  name: "block_1a_bn_2"
  type: "Scale"
  bottom: "block_1a_conv_2"
  top: "block_1a_bn_2"
  scale_param {
    axis: 1
    bias_term: true
  }
}
layer {
  name: "add_1"
  type: "Eltwise"
  bottom: "block_1a_bn_2"
  bottom: "block_1a_bn_shortcut"
  top: "add_1"
  eltwise_param {
    operation: SUM
  }
}
layer {
  name: "activation_3/Relu"
  type: "ReLU"
  bottom: "add_1"
  top: "activation_3/Relu"
}
layer {
  name: "block_2a_conv_1"
  type: "Convolution"
  bottom: "activation_3/Relu"
  top: "block_2a_conv_1"
  convolution_param {
    num_output: 128
    pad_h: 1
    pad_w: 1
    kernel_h: 3
    kernel_w: 3
    stride_h: 2
    stride_w: 2
  }
}
layer {
  name: "block_2a_conv_shortcut"
  type: "Convolution"
  bottom: "activation_3/Relu"
  top: "block_2a_conv_shortcut"
  convolution_param {
    num_output: 128
    pad_h: 0
    pad_w: 0
    kernel_h: 1
    kernel_w: 1
    stride_h: 2
    stride_w: 2
  }
}
layer {
  name: "block_2a_bn_1"
  type: "Scale"
  bottom: "block_2a_conv_1"
  top: "block_2a_bn_1"
  scale_param {
    axis: 1
    bias_term: true
  }
}
layer {
  name: "block_2a_bn_shortcut"
  type: "Scale"
  bottom: "block_2a_conv_shortcut"
  top: "block_2a_bn_shortcut"
  scale_param {
    axis: 1
    bias_term: true
  }
}
layer {
  name: "activation_4/Relu"
  type: "ReLU"
  bottom: "block_2a_bn_1"
  top: "activation_4/Relu"
}
layer {
  name: "block_2a_conv_2"
  type: "Convolution"
  bottom: "activation_4/Relu"
  top: "block_2a_conv_2"
  convolution_param {
    num_output: 128
    pad_h: 1
    pad_w: 1
    kernel_h: 3
    kernel_w: 3
    stride_h: 1
    stride_w: 1
  }
}
layer {
  name: "block_2a_bn_2"
  type: "Scale"
  bottom: "block_2a_conv_2"
  top: "block_2a_bn_2"
  scale_param {
    axis: 1
    bias_term: true
  }
}
layer {
  name: "add_2"
  type: "Eltwise"
  bottom: "block_2a_bn_2"
  bottom: "block_2a_bn_shortcut"
  top: "add_2"
  eltwise_param {
    operation: SUM
  }
}
layer {
  name: "activation_5/Relu"
  type: "ReLU"
  bottom: "add_2"
  top: "activation_5/Relu"
}
layer {
  name: "block_3a_conv_1"
  type: "Convolution"
  bottom: "activation_5/Relu"
  top: "block_3a_conv_1"
  convolution_param {
    num_output: 232
    pad_h: 1
    pad_w: 1
    kernel_h: 3
    kernel_w: 3
    stride_h: 2
    stride_w: 2
  }
}
layer {
  name: "block_3a_conv_shortcut"
  type: "Convolution"
  bottom: "activation_5/Relu"
  top: "block_3a_conv_shortcut"
  convolution_param {
    num_output: 200
    pad_h: 0
    pad_w: 0
    kernel_h: 1
    kernel_w: 1
    stride_h: 2
    stride_w: 2
  }
}
layer {
  name: "block_3a_bn_1"
  type: "Scale"
  bottom: "block_3a_conv_1"
  top: "block_3a_bn_1"
  scale_param {
    axis: 1
    bias_term: true
  }
}
layer {
  name: "block_3a_bn_shortcut"
  type: "Scale"
  bottom: "block_3a_conv_shortcut"
  top: "block_3a_bn_shortcut"
  scale_param {
    axis: 1
    bias_term: true
  }
}
layer {
  name: "activation_6/Relu"
  type: "ReLU"
  bottom: "block_3a_bn_1"
  top: "activation_6/Relu"
}
layer {
  name: "block_3a_conv_2"
  type: "Convolution"
  bottom: "activation_6/Relu"
  top: "block_3a_conv_2"
  convolution_param {
    num_output: 200
    pad_h: 1
    pad_w: 1
    kernel_h: 3
    kernel_w: 3
    stride_h: 1
    stride_w: 1
  }
}
layer {
  name: "block_3a_bn_2"
  type: "Scale"
  bottom: "block_3a_conv_2"
  top: "block_3a_bn_2"
  scale_param {
    axis: 1
    bias_term: true
  }
}
layer {
  name: "add_3"
  type: "Eltwise"
  bottom: "block_3a_bn_2"
  bottom: "block_3a_bn_shortcut"
  top: "add_3"
  eltwise_param {
    operation: SUM
  }
}
layer {
  name: "activation_7/Relu"
  type: "ReLU"
  bottom: "add_3"
  top: "activation_7/Relu"
}
layer {
  name: "block_4a_conv_1"
  type: "Convolution"
  bottom: "activation_7/Relu"
  top: "block_4a_conv_1"
  convolution_param {
    num_output: 152
    pad_h: 1
    pad_w: 1
    kernel_h: 3
    kernel_w: 3
    stride_h: 1
    stride_w: 1
  }
}
layer {
  name: "block_4a_conv_shortcut"
  type: "Convolution"
  bottom: "activation_7/Relu"
  top: "block_4a_conv_shortcut"
  convolution_param {
    num_output: 176
    pad_h: 0
    pad_w: 0
    kernel_h: 1
    kernel_w: 1
    stride_h: 1
    stride_w: 1
  }
}
layer {
  name: "block_4a_bn_1"
  type: "Scale"
  bottom: "block_4a_conv_1"
  top: "block_4a_bn_1"
  scale_param {
    axis: 1
    bias_term: true
  }
}
layer {
  name: "block_4a_bn_shortcut"
  type: "Scale"
  bottom: "block_4a_conv_shortcut"
  top: "block_4a_bn_shortcut"
  scale_param {
    axis: 1
    bias_term: true
  }
}
layer {
  name: "activation_8/Relu"
  type: "ReLU"
  bottom: "block_4a_bn_1"
  top: "activation_8/Relu"
}
layer {
  name: "block_4a_conv_2"
  type: "Convolution"
  bottom: "activation_8/Relu"
  top: "block_4a_conv_2"
  convolution_param {
    num_output: 176
    pad_h: 1
    pad_w: 1
    kernel_h: 3
    kernel_w: 3
    stride_h: 1
    stride_w: 1
  }
}
layer {
  name: "block_4a_bn_2"
  type: "Scale"
  bottom: "block_4a_conv_2"
  top: "block_4a_bn_2"
  scale_param {
    axis: 1
    bias_term: true
  }
}
layer {
  name: "add_4"
  type: "Eltwise"
  bottom: "block_4a_bn_2"
  bottom: "block_4a_bn_shortcut"
  top: "add_4"
  eltwise_param {
    operation: SUM
  }
}
layer {
  name: "activation_9/Relu"
  type: "ReLU"
  bottom: "add_4"
  top: "activation_9/Relu"
}
layer {
  name: "conv2d_bbox"
  type: "Convolution"
  bottom: "activation_9/Relu"
  top: "conv2d_bbox"
  convolution_param {
    num_output: 16
    pad_h: 0
    pad_w: 0
    kernel_h: 1
    kernel_w: 1
    stride_h: 1
    stride_w: 1
  }
}
layer {
  name: "conv2d_cov"
  type: "Convolution"
  bottom: "activation_9/Relu"
  top: "conv2d_cov"
  convolution_param {
    num_output: 4
    pad_h: 0
    pad_w: 0
    kernel_h: 1
    kernel_w: 1
    stride_h: 1
    stride_w: 1
  }
}
layer {
  name: "conv2d_cov/Sigmoid"
  type: "Sigmoid"
  bottom: "conv2d_cov"
  top: "conv2d_cov/Sigmoid"
}

This is what I got from the samples. The model is:
/opt/nvidia/deepstream/deepstream-6.2/samples/models/Container_Detection/Primary_Detector/resnet18_detector.trt.int8

config_infer_primary.txt (4.1 KB)

penang_port_config_source.txt (5.7 KB)

From the error, the app failed to generate the TensorRT engine.

  1. Can the model work with other test tools?
  2. What are the model's inputs? What is the model type? Why is the model file resnet18 while the proto file is resnet10?
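On question 1, one way to sanity-check the serialized engine outside DeepStream is trtexec (the path below is taken from this thread; the trtexec location assumes a standard TensorRT install):

```shell
# Try to deserialize the engine directly with trtexec. If the engine was
# built by a different TensorRT version, this should reproduce the same
# "Serialization assertion ... Version tag does not match" error.
/usr/src/tensorrt/bin/trtexec \
  --loadEngine=/opt/nvidia/deepstream/deepstream-6.2/samples/models/Container_Detection/Primary_Detector/resnet18_detector.trt.int8
```

A version-mismatched engine cannot be repaired in place: delete it and let DeepStream rebuild it from the model files on the next run, or rebuild it yourself on this machine against the same TensorRT version that DeepStream links.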


glueck@glueck-WHITLEY:/opt/nvidia/deepstream/deepstream-6.2/sources/apps/sample_apps/deepstream-container-detection-etlt$ ./deepstream-nvdsanalytics-test nvdsanalytics_pgie_config_int8.txt
Warn: 'threshold' parameter has been deprecated. Use 'pre-cluster-threshold' instead.
Now playing: nvdsanalytics_pgie_config_int8.txt,
libEGL warning: DRI3: Screen seems not DRI3 capable
libEGL warning: DRI2: failed to authenticate
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
gstnvtracker: Batch processing is ON
gstnvtracker: Past frame output is OFF
[NvMultiObjectTracker] Initialized
0:00:00.337463544 7801 0x5654aa4022f0 WARN nvinfer gstnvinfer.cpp:677:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1170> [UID = 1]: Warning, OpenCV has been deprecated. Using NMS for clustering instead of cv::groupRectangles with topK = 20 and NMS Threshold = 0.5
WARNING: [TRT]: CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage. See CUDA_MODULE_LOADING in CUDA C++ Programming Guide
ERROR: [TRT]: 6: The engine plan file is not compatible with this version of TensorRT, expecting library version 8.5.1.7 got 8.5.2.2, please rebuild.
ERROR: [TRT]: 4: [runtime.cpp::deserializeCudaEngine::66] Error Code 4: Internal Error (Engine deserialization failed.)
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:1533 Deserialize engine failed from file: /opt/nvidia/deepstream/deepstream-6.2/samples/models/Container_Detector/int8/resnet18_detector.etlt_b1_gpu0_fp32.engine
0:00:03.344712858 7801 0x5654aa4022f0 WARN nvinfer gstnvinfer.cpp:677:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1897> [UID = 1]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-6.2/samples/models/Container_Detector/int8/resnet18_detector.etlt_b1_gpu0_fp32.engine failed
0:00:03.402577338 7801 0x5654aa4022f0 WARN nvinfer gstnvinfer.cpp:677:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2002> [UID = 1]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-6.2/samples/models/Container_Detector/int8/resnet18_detector.etlt_b1_gpu0_fp32.engine failed, try rebuild
0:00:03.402602070 7801 0x5654aa4022f0 INFO nvinfer gstnvinfer.cpp:680:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1923> [UID = 1]: Trying to create engine from model files
WARNING: [TRT]: CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage. See CUDA_MODULE_LOADING in CUDA C++ Programming Guide
WARNING: [TRT]: CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage. See CUDA_MODULE_LOADING in CUDA C++ Programming Guide
ERROR: [TRT]: 3: [builder.cpp::~Builder::307] Error Code 3: API Usage Error (Parameter check failed at: optimizer/api/builder.cpp::~Builder::307, condition: mObjectCounter.use_count() == 1. Destroying a builder object before destroying objects it created leads to undefined behavior.
)
WARNING: [TRT]: The implicit batch dimension mode has been deprecated. Please create the network with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag whenever possible.
WARNING: ../nvdsinfer/nvdsinfer_model_builder.cpp:1459 Serialize engine failed because of file path: /opt/nvidia/deepstream/deepstream-6.2/samples/models/Container_Detector/int8/resnet18_detector.etlt_b1_gpu0_fp32.engine opened error
0:00:17.719230625 7801 0x5654aa4022f0 WARN nvinfer gstnvinfer.cpp:677:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1950> [UID = 1]: failed to serialize cude engine to file: /opt/nvidia/deepstream/deepstream-6.2/samples/models/Container_Detector/int8/resnet18_detector.etlt_b1_gpu0_fp32.engine
WARNING: [TRT]: CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage. See CUDA_MODULE_LOADING in CUDA C++ Programming Guide
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT input_1 3x368x640
1 OUTPUT kFLOAT output_bbox/BiasAdd 4x23x40
2 OUTPUT kFLOAT output_cov/Sigmoid 1x23x40

0:00:17.825882777 7801 0x5654aa4022f0 INFO nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus: [UID 1]: Load new model:nvdsanalytics_pgie_config_int8.txt sucessfully
[NvMultiObjectTracker] De-initialized
Running…
ERROR from element uri-decode-bin: Invalid URI "nvdsanalytics_pgie_config_int8.txt".
Error details: gsturidecodebin.c(1383): gen_source_element (): /GstPipeline:nvdsanalytics-test-pipeline/GstBin:source-bin-00/GstURIDecodeBin:uri-decode-bin
Returned, stopping playback
Deleting pipeline

Details about my server
glueck@glueck-WHITLEY:~$ nvidia-smi
Tue Jan 9 10:39:26 2024
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 525.147.05 Driver Version: 525.147.05 CUDA Version: 12.0 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 Tesla T4 Off | 00000000:98:00.0 Off | 0 |
| N/A 40C P8 16W / 70W | 11MiB / 15360MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| 0 N/A N/A 1521 G /usr/lib/xorg/Xorg 4MiB |
| 0 N/A N/A 2260 G /usr/lib/xorg/Xorg 4MiB |
+-----------------------------------------------------------------------------+
DeepStream 6.2
glueck@glueck-WHITLEY:~$ dpkg -l | grep nvinfer
ii libnvinfer-bin 8.5.1-1+cuda11.8 amd64 TensorRT binaries
ii libnvinfer-dev 8.5.1-1+cuda11.8 amd64 TensorRT development libraries and headers
ii libnvinfer-plugin-dev 8.5.1-1+cuda11.8 amd64 TensorRT plugin libraries
ii libnvinfer-plugin8 8.5.1-1+cuda11.8 amd64 TensorRT plugin libraries
ii libnvinfer-samples 8.5.1-1+cuda11.8 all TensorRT samples
ii libnvinfer8 8.5.1-1+cuda11.8 amd64 TensorRT runtime libraries
ii python3-libnvinfer 8.5.1-1+cuda11.8 amd64 Python 3 bindings for TensorRT
ii python3-libnvinfer-dev 8.5.1-1+cuda11.8 amd64 Python 3 development package for TensorRT

Path to the files:
glueck@glueck-WHITLEY:~$ cd /opt/nvidia/deepstream/deepstream-6.2/samples/models/Container_Detector/int8
glueck@glueck-WHITLEY:/opt/nvidia/deepstream/deepstream-6.2/samples/models/Container_Detector/int8$ ls
calibration.bin resnet18_detector.etlt_b1_gpu0_fp32.engine
calibration.tensor resnet18_detector.trt
labels.txt resnet18_detector.trt.int8
resnet18_detector.etlt

nvdsanalytics_pgie_config_qat_int8.txt (3.6 KB)

nvdsanalytics_pgie_config_int8.txt (3.6 KB)
nvdsanalytics_pgie_config_fp32.txt (3.4 KB)
config_nvdsanalytics.txt (3.0 KB)

glueck@glueck-WHITLEY:/opt/nvidia/deepstream/deepstream-6.2/sources/apps/sample_apps/deepstream-container-detection-etlt$ ls
config_nvdsanalytics.txt Makefile
deepstream_app_main.c Makefile.ds
deepstream_nvdsanalytics_meta.cpp nvdsanalytics_pgie_config_fp32.txt
deepstream-nvdsanalytics-test nvdsanalytics_pgie_config_int8.txt
deepstream_nvdsanalytics_test.cpp nvdsanalytics_pgie_config_qat_int8.txt
deepstream_nvdsanalytics_test.o README

Why am I getting this error, and how do I run the container model with the sample apps?

From the error, the start command line is wrong. Please refer to section "4. Usage:" in deepstream-nvdsanalytics-test/README for how to run.

===============================================================================
4. Usage:

To run:
$ ./deepstream-nvdsanalytics-test <uri1> [uri2] ... [uriN]
e.g.
$ ./deepstream-nvdsanalytics-test file:///home/ubuntu/video1.mp4 file:///home/ubuntu/video2.mp4
$ ./deepstream-nvdsanalytics-test rtsp://127.0.0.1/video1 rtsp://127.0.0.1/video2

/home/glueck/Downloads/TopLow.mp4

This is the path to the video. How do I run it? I am getting an error.

Please try: ./deepstream-nvdsanalytics-test file:///home/glueck/Downloads/TopLow.mp4

apps/deepstream-container-detection-etlt$ sudo ./deepstream-nvdsanalytics-test file:///home/glueck/Downloads/TopLow.mp4
[sudo] password for glueck:
./deepstream-nvdsanalytics-test: error while loading shared libraries: libnvdsgst_meta.so: cannot open shared object file: No such file or directory

If "/opt/nvidia/deepstream/deepstream/lib/libnvdsgst_meta.so" exists, please run "export LD_LIBRARY_PATH=/opt/nvidia/deepstream/deepstream/lib/:$LD_LIBRARY_PATH" first, then try again.
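Note that sudo resets the environment by default, so an export in your shell does not reach an app you then launch under sudo. A quick demonstration, using env -i to mimic sudo's env_reset:

```shell
export LD_LIBRARY_PATH=/opt/nvidia/deepstream/deepstream/lib
# env -i starts the child with a clean environment, much like sudo's
# default env_reset: the exported variable does not survive.
env -i sh -c 'echo "under clean env: ${LD_LIBRARY_PATH:-unset}"'
# prints: under clean env: unset
```

So either run the app without sudo, or pass the variable through explicitly, e.g. sudo env LD_LIBRARY_PATH=/opt/nvidia/deepstream/deepstream/lib ./deepstream-nvdsanalytics-test file:///home/glueck/Downloads/TopLow.mp4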

glueck@glueck-WHITLEY:/opt/nvidia/deepstream/deepstream-6.2/sources/apps/sample_apps/deepstream-container-detection-etlt$ sudo ./deepstream-nvdsanalytics-test file:///home/glueck/Downloads/TopLow.mp4
[sudo] password for glueck:
./deepstream-nvdsanalytics-test: error while loading shared libraries: libnvdsgst_meta.so: cannot open shared object file: No such file or directory
glueck@glueck-WHITLEY:/opt/nvidia/deepstream/deepstream-6.2/sources/apps/sample_apps/deepstream-container-detection-etlt$ sudo ./deepstream-nvdsanalytics-test file:///home/glueck/Downloads/TopLow.mp4
./deepstream-nvdsanalytics-test: error while loading shared libraries: libnvdsgst_meta.so: cannot open shared object file: No such file or directory
glueck@glueck-WHITLEY:/opt/nvidia/deepstream/deepstream-6.2/sources/apps/sample_apps/deepstream-container-detection-etlt$ export LD_LIBRARY_PATH=/opt/nvidia/deepstream/deepstream/lib/:$LD_LIBRARY_PATH
glueck@glueck-WHITLEY:/opt/nvidia/deepstream/deepstream-6.2/sources/apps/sample_apps/deepstream-container-detection-etlt$ sudo ./deepstream-nvdsanalytics-test file:///home/glueck/Downloads/TopLow.mp4
./deepstream-nvdsanalytics-test: error while loading shared libraries: libnvdsgst_meta.so: cannot open shared object file: No such file or directory
glueck@glueck-WHITLEY:/opt/nvidia/deepstream/deepstream-6.2/sources/apps/sample_apps/deepstream-container-detection-etlt$ ./deepstream-nvdsanalytics-test -c /opt/nvidia/deepstream/deepstream-6.2/sources/apps/sample_apps/deepstream-container-detection-etlt/nvdsanalytics_pgie_config_int8.txt
Warn: 'threshold' parameter has been deprecated. Use 'pre-cluster-threshold' instead.
WARNING: Overriding infer-config batch-size (1) with number of sources (2)
Now playing: -c, /opt/nvidia/deepstream/deepstream-6.2/sources/apps/sample_apps/deepstream-container-detection-etlt/nvdsanalytics_pgie_config_int8.txt,
libEGL warning: DRI3: Screen seems not DRI3 capable
libEGL warning: DRI2: failed to authenticate
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
gstnvtracker: Batch processing is ON
gstnvtracker: Past frame output is OFF
[NvMultiObjectTracker] Initialized
0:00:00.933086851 5979 0x7fd3d8002380 WARN nvinfer gstnvinfer.cpp:677:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1170> [UID = 1]: Warning, OpenCV has been deprecated. Using NMS for clustering instead of cv::groupRectangles with topK = 20 and NMS Threshold = 0.5
WARNING: [TRT]: CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage. See CUDA_MODULE_LOADING in CUDA C++ Programming Guide
ERROR: [TRT]: 6: The engine plan file is not compatible with this version of TensorRT, expecting library version 8.5.1.7 got 8.5.2.2, please rebuild.
ERROR: [TRT]: 4: [runtime.cpp::deserializeCudaEngine::66] Error Code 4: Internal Error (Engine deserialization failed.)
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:1533 Deserialize engine failed from file: /opt/nvidia/deepstream/deepstream-6.2/samples/models/Container_Detector/int8/resnet18_detector.etlt_b1_gpu0_fp32.engine
0:00:04.651836474 5979 0x7fd3d8002380 WARN nvinfer gstnvinfer.cpp:677:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1897> [UID = 1]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-6.2/samples/models/Container_Detector/int8/resnet18_detector.etlt_b1_gpu0_fp32.engine failed
0:00:04.702026444 5979 0x7fd3d8002380 WARN nvinfer gstnvinfer.cpp:677:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2002> [UID = 1]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-6.2/samples/models/Container_Detector/int8/resnet18_detector.etlt_b1_gpu0_fp32.engine failed, try rebuild
0:00:04.702049823 5979 0x7fd3d8002380 INFO nvinfer gstnvinfer.cpp:680:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1923> [UID = 1]: Trying to create engine from model files
WARNING: [TRT]: CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage. See CUDA_MODULE_LOADING in CUDA C++ Programming Guide
WARNING: [TRT]: CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage. See CUDA_MODULE_LOADING in CUDA C++ Programming Guide
ERROR: [TRT]: 3: [builder.cpp::~Builder::307] Error Code 3: API Usage Error (Parameter check failed at: optimizer/api/builder.cpp::~Builder::307, condition: mObjectCounter.use_count() == 1. Destroying a builder object before destroying objects it created leads to undefined behavior.
)
WARNING: [TRT]: The implicit batch dimension mode has been deprecated. Please create the network with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag whenever possible.
WARNING: ../nvdsinfer/nvdsinfer_model_builder.cpp:1459 Serialize engine failed because of file path: /opt/nvidia/deepstream/deepstream-6.2/samples/models/Container_Detector/int8/resnet18_detector.etlt_b2_gpu0_fp32.engine opened error
0:00:21.324286444 5979 0x7fd3d8002380 WARN nvinfer gstnvinfer.cpp:677:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1950> [UID = 1]: failed to serialize cude engine to file: /opt/nvidia/deepstream/deepstream-6.2/samples/models/Container_Detector/int8/resnet18_detector.etlt_b2_gpu0_fp32.engine
WARNING: [TRT]: CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage. See CUDA_MODULE_LOADING in CUDA C++ Programming Guide
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT input_1 3x368x640
1 OUTPUT kFLOAT output_bbox/BiasAdd 4x23x40
2 OUTPUT kFLOAT output_cov/Sigmoid 1x23x40

0:00:21.425099695 5979 0x7fd3d8002380 INFO nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus: [UID 1]: Load new model:nvdsanalytics_pgie_config_int8.txt sucessfully
[NvMultiObjectTracker] De-initialized
Running…
ERROR from element uri-decode-bin: Invalid URI “-c”.
Error details: gsturidecodebin.c(1383): gen_source_element (): /GstPipeline:nvdsanalytics-test-pipeline/GstBin:source-bin-00/GstURIDecodeBin:uri-decode-bin
Returned, stopping playback
Deleting pipeline
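Two distinct problems show in the log above: the "Serialize engine failed ... opened error" warning usually means nvinfer has no write permission in the engine directory (so the engine is rebuilt on every start), and "Invalid URI “-c”" means uridecodebin was handed a config flag where it expected a stream URI. A rough shell sketch of both checks (the engine path is taken from the log; the app name and its argument convention at the end are assumptions):

```shell
#!/bin/sh
# Sketch: check engine-directory writability and build a proper file:// URI.
# The commented-out app invocation at the end is hypothetical.

ENGINE_DIR=/opt/nvidia/deepstream/deepstream-6.2/samples/models/Container_Detector/int8
STREAM=/opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.h264

# nvinfer serializes the rebuilt engine here; without write access it logs
# "Serialize engine failed ... opened error" and rebuilds on every run.
if [ ! -w "$ENGINE_DIR" ]; then
    echo "engine dir not writable; try: sudo chown -R \$USER $ENGINE_DIR"
fi

# uridecodebin rejects bare flags like "-c"; local files need a file:// prefix.
URI="file://$STREAM"
echo "URI to pass to the app: $URI"
# ./deepstream-nvdsanalytics-test "$URI"   # hypothetical invocation
```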

  1. Could you share the result of “ll /opt/nvidia/deepstream/deepstream/lib/libnvdsgst_meta.so”?
  2. Can the simplest sample, deepstream-test1, run well?
  1. iner-detection-etlt$ /opt/nvidia/deepstream/deepstream/lib/libnvdsgst_meta.so
    Segmentation fault (core dumped)

glueck@glueck-WHITLEY:/opt/nvidia/deepstream/deepstream-6.2/sources/apps/sample_apps/deepstream-test1$ ls
deepstream-test1-app dstest1_config.yml dstest1_pgie_nvinferserver_config.txt
deepstream_test1_app.c dstest1_pgie_config.txt Makefile
deepstream_test1_app.o dstest1_pgie_config.yml README

Should I try running this? Can you guide me through it?

  1. Sorry, could you share the result of “ll /opt/nvidia/deepstream/deepstream/lib/libnvdsgst_meta.so”? I am wondering if this .so exists.
  2. Please run ./deepstream-test1-app /opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.h264

glueck@glueck-WHITLEY:/opt/nvidia/deepstream/deepstream-6.2/sources/apps/sample_apps/deepstream-test1$ ll /opt/nvidia/deepstream/deepstream/lib/libnvdsgst_meta.so
-rwxr-xr-x 1 root root 23096 Jan 13 2023 /opt/nvidia/deepstream/deepstream/lib/libnvdsgst_meta.so*

glueck@glueck-WHITLEY:/opt/nvidia/deepstream/deepstream-6.2/sources/apps/sample_apps/deepstream-test1$ sudo ./deepstream-test1-app /opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.h264
Added elements to bin
Using file: /opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.h264
libEGL warning: DRI3: Screen seems not DRI3 capable
libEGL warning: DRI2: failed to authenticate
WARNING: [TRT]: CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage. See CUDA_MODULE_LOADING in CUDA C++ Programming Guide
WARNING: ../nvdsinfer/nvdsinfer_model_builder.cpp:1487 Deserialize engine failed because file path: /opt/nvidia/deepstream/deepstream-6.2/sources/apps/sample_apps/deepstream-test1/../../../../samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine open error
0:00:03.350120541 6410 0x55f0a1ed6a90 WARN nvinfer gstnvinfer.cpp:677:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1897> [UID = 1]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-6.2/sources/apps/sample_apps/deepstream-test1/../../../../samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine failed
0:00:03.400064536 6410 0x55f0a1ed6a90 WARN nvinfer gstnvinfer.cpp:677:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2002> [UID = 1]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-6.2/sources/apps/sample_apps/deepstream-test1/../../../../samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine failed, try rebuild
0:00:03.400135298 6410 0x55f0a1ed6a90 INFO nvinfer gstnvinfer.cpp:680:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1923> [UID = 1]: Trying to create engine from model files
WARNING: [TRT]: CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage. See CUDA_MODULE_LOADING in CUDA C++ Programming Guide
WARNING: [TRT]: The implicit batch dimension mode has been deprecated. Please create the network with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag whenever possible.
0:00:30.430116008 6410 0x55f0a1ed6a90 INFO nvinfer gstnvinfer.cpp:680:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1955> [UID = 1]: serialize cuda engine to file: /opt/nvidia/deepstream/deepstream-6.2/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine successfully
WARNING: [TRT]: CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage. See CUDA_MODULE_LOADING in CUDA C++ Programming Guide
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT input_1 3x368x640
1 OUTPUT kFLOAT conv2d_bbox 16x23x40
2 OUTPUT kFLOAT conv2d_cov/Sigmoid 4x23x40

0:00:30.528749517 6410 0x55f0a1ed6a90 INFO nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus: [UID 1]: Load new model:dstest1_pgie_config.txt sucessfully
Running…
cuGraphicsGLRegisterBuffer failed with error(219) gst_eglglessink_cuda_init texture = 1
Frame Number = 0 Number of objects = 12 Vehicle Count = 8 Person Count = 4
0:00:31.136899894 6410 0x55f0a0946300 WARN nvinfer gstnvinfer.cpp:2369:gst_nvinfer_output_loop: error: Internal data stream error.
0:00:31.136917464 6410 0x55f0a0946300 WARN nvinfer gstnvinfer.cpp:2369:gst_nvinfer_output_loop: error: streaming stopped, reason not-negotiated (-4)
ERROR from element primary-nvinference-engine: Internal data stream error.
Error details: gstnvinfer.cpp(2369): gst_nvinfer_output_loop (): /GstPipeline:dstest1-pipeline/GstNvInfer:primary-nvinference-engine:
streaming stopped, reason not-negotiated (-4)
Returned, stopping playback
Frame Number = 1 Number of objects = 11 Vehicle Count = 8 Person Count = 3
Frame Number = 2 Number of objects = 11 Vehicle Count = 7 Person Count = 4
nvstreammux: Successfully handled EOS for source_id=0
Frame Number = 3 Number of objects = 13 Vehicle Count = 8 Person Count = 5
Frame Number = 4 Number of objects = 12 Vehicle Count = 8 Person Count = 4
Frame Number = 5 Number of objects = 12 Vehicle Count = 8 Person Count = 4
Frame Number = 6 Number of objects = 11 Vehicle Count = 7 Person Count = 4
Deleting pipeline
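In this second run the model builds and inference produces per-frame counts, so "cuGraphicsGLRegisterBuffer failed with error(219)" together with the not-negotiated error points at the EGL display sink rather than at inference (the earlier libEGL DRI3/DRI2 warnings hint at a limited or remote X display). One way to confirm this, not suggested in the thread itself, is to run the same model with a fakesink instead of a display sink. The sketch below only assembles and prints the gst-launch command; the config path is assumed to be relative to the deepstream-test1 directory:

```shell
#!/bin/sh
# Sketch: assemble a display-free pipeline to separate inference problems
# from EGL/display problems. Run the printed command manually.

STREAM=/opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.h264
CFG=dstest1_pgie_config.txt   # assumed: run from the deepstream-test1 dir

PIPELINE="filesrc location=$STREAM ! h264parse ! nvv4l2decoder ! \
m.sink_0 nvstreammux name=m batch-size=1 width=1280 height=720 ! \
nvinfer config-file-path=$CFG ! fakesink"

# If this reaches EOS without errors, inference is fine and the failure
# lies in the display sink / EGL setup.
echo "gst-launch-1.0 $PIPELINE"
```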

There is no update from you for a period, assuming this is not an issue any more. Hence we are closing this topic. If need further support, please open a new one. Thanks.

  1. That libnvdsgst_meta.so is valid. Please run “rm -rf ~/.cache/gstreamer-1.0/” first, then try again.
  2. If it still doesn’t work, please share the result of “ldd /opt/nvidia/deepstream/deepstream/lib/gst-plugins/libnvdsgst_infer.so” and “ldd /opt/nvidia/deepstream/deepstream/lib/gst-plugins/libnvdsgst_dsanalytics.so”. Thanks!
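The two steps above can be scripted together. `check_deps` is a hypothetical helper that flags any shared library `ldd` cannot resolve; an unresolved `libnvds_*` dependency here would explain the plugin failing to load:

```shell
#!/bin/sh
# Sketch: re-scan GStreamer plugins and look for unresolved libraries.

# Force GStreamer to rebuild its plugin registry on next start.
rm -rf ~/.cache/gstreamer-1.0/

# Hypothetical helper: report shared libraries the loader cannot find.
check_deps() {
    if [ ! -f "$1" ]; then
        echo "$1: file does not exist"
        return 1
    fi
    if ldd "$1" | grep "not found"; then
        echo "$1: has unresolved dependencies (listed above)"
    else
        echo "$1: all dependencies resolved"
    fi
}

check_deps /opt/nvidia/deepstream/deepstream/lib/gst-plugins/libnvdsgst_infer.so
check_deps /opt/nvidia/deepstream/deepstream/lib/gst-plugins/libnvdsgst_dsanalytics.so
```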

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.