Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (For bugs: include which sample app is used, the configuration file content, the command line used, and other details for reproducing.)
• Requirement details (For new requirements: include the module name, i.e. which plugin or which sample application, and the function description.)
glueck@glueck-WHITLEY:~$ nvidia-smi
Thu Jan 4 16:02:51 2024
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 525.147.05   Driver Version: 525.147.05   CUDA Version: 12.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  Tesla T4            Off  | 00000000:98:00.0 Off |                    0 |
| N/A   36C    P8     9W /  70W |     11MiB / 15360MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|    0   N/A  N/A      1547      G   /usr/lib/xorg/Xorg                  4MiB |
|    0   N/A  N/A      2306      G   /usr/lib/xorg/Xorg                  4MiB |
+-----------------------------------------------------------------------------+
glueck@glueck-WHITLEY:~$ nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2019 NVIDIA Corporation
Built on Sun_Jul_28_19:07:16_PDT_2019
Cuda compilation tools, release 10.1, V10.1.243
Sample stream:
/opt/nvidia/deepstream/deepstream-6.2/samples/streams/TopLow.mp4
Model:
/opt/nvidia/deepstream/deepstream-6.2/samples/models/Container_Detection/Primary_Detector/resnet18_detector.trt.int8
glueck@glueck-WHITLEY:/opt/nvidia/deepstream/deepstream-6.2/samples/configs/deepstream-app$ sudo deepstream-app -c penang_port_config_source.txt
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
gstnvtracker: Batch processing is ON
gstnvtracker: Past frame output is ON
[NvMultiObjectTracker] Initialized
WARNING: [TRT]: CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage. See CUDA_MODULE_LOADING in CUDA C++ Programming Guide
ERROR: [TRT]: 1: [stdArchiveReader.cpp::StdArchiveReader::42] Error Code 1: Serialization (Serialization assertion stdVersionRead == serializationVersion failed.Version tag does not match. Note: Current Version: 232, Serialized Engine Version: 205)
ERROR: [TRT]: 4: [runtime.cpp::deserializeCudaEngine::66] Error Code 4: Internal Error (Engine deserialization failed.)
ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:1533 Deserialize engine failed from file: /opt/nvidia/deepstream/deepstream-6.2/samples/configs/deepstream-app/…/…/models/Container_Detection/Primary_Detector/resnet18_detector.trt.int8
0:00:03.366906676 6549 0x5558a7a9d350 WARN nvinfer gstnvinfer.cpp:677:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1897> [UID = 1]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-6.2/samples/configs/deepstream-app/…/…/models/Container_Detection/Primary_Detector/resnet18_detector.trt.int8 failed
0:00:03.440623495 6549 0x5558a7a9d350 WARN nvinfer gstnvinfer.cpp:677:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2002> [UID = 1]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-6.2/samples/configs/deepstream-app/…/…/models/Container_Detection/Primary_Detector/resnet18_detector.trt.int8 failed, try rebuild
0:00:03.440649086 6549 0x5558a7a9d350 INFO nvinfer gstnvinfer.cpp:680:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1923> [UID = 1]: Trying to create engine from model files
WARNING: [TRT]: CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage. See CUDA_MODULE_LOADING in CUDA C++ Programming Guide
WARNING: [TRT]: The implicit batch dimension mode has been deprecated. Please create the network with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag whenever possible.
Weights for layer conv1 doesn't exist
ERROR: [TRT]: CaffeParser: ERROR: Attempting to access NULL weights
Weights for layer conv1 doesn't exist
ERROR: [TRT]: CaffeParser: ERROR: Attempting to access NULL weights
ERROR: [TRT]: 3: conv1:kernel weights has count 0 but 4704 was expected
ERROR: [TRT]: 4: conv1: count of 0 weights in kernel, but kernel dimensions (7,7) with 3 input channels, 32 output channels and 1 groups were specified. Expected Weights count is 3 * 7*7 * 32 / 1 = 4704
ERROR: [TRT]: 4: [convolutionNode.cpp::computeOutputExtents::58] Error Code 4: Internal Error (conv1: number of kernel weights does not match tensor dimensions)
deepstream-app: /_src/parsers/parserHelper.h:76: nvinfer1::Dims3 parserhelper::getCHW(const Dims&): Assertion `d.nbDims >= 3' failed.
Aborted
This is what I got from the samples. The model is:
/opt/nvidia/deepstream/deepstream-6.2/samples/models/Container_Detection/Primary_Detector/resnet18_detector.trt.int8
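The serialization assertion above ("Current Version: 232, Serialized Engine Version: 205") means the .trt.int8 engine was serialized by an older TensorRT than the installed runtime. A minimal sketch of the usual fix, assuming the engine path from the log: remove the stale engine so nvinfer rebuilds it from the model files referenced in the pgie config.

```shell
# Sketch, assuming the engine path from the log above. The serialized engine
# predates the installed TensorRT runtime, so deserialization fails; deleting
# it forces nvinfer to rebuild the engine from the model files on the next run.
ENGINE=/opt/nvidia/deepstream/deepstream-6.2/samples/models/Container_Detection/Primary_Detector/resnet18_detector.trt.int8
rm -f "$ENGINE"   # -f: no error if the file is already gone
[ ! -e "$ENGINE" ] || echo "could not remove $ENGINE (try sudo)"
```

Note that in the log the rebuild then failed with "Weights for layer conv1 doesn't exist", which suggests the Caffe model/prototxt pair referenced by the config is itself missing or mismatched; deleting the engine alone does not fix that.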
glueck@glueck-WHITLEY:/opt/nvidia/deepstream/deepstream-6.2/sources/apps/sample_apps/deepstream-container-detection-etlt$ ./deepstream-nvdsanalytics-test nvdsanalytics_pgie_config_int8.txt
Warn: 'threshold' parameter has been deprecated. Use 'pre-cluster-threshold' instead.
Now playing: nvdsanalytics_pgie_config_int8.txt,
libEGL warning: DRI3: Screen seems not DRI3 capable
libEGL warning: DRI2: failed to authenticate
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
gstnvtracker: Batch processing is ON
gstnvtracker: Past frame output is OFF
[NvMultiObjectTracker] Initialized
0:00:00.337463544 7801 0x5654aa4022f0 WARN nvinfer gstnvinfer.cpp:677:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1170> [UID = 1]: Warning, OpenCV has been deprecated. Using NMS for clustering instead of cv::groupRectangles with topK = 20 and NMS Threshold = 0.5
WARNING: [TRT]: CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage. See CUDA_MODULE_LOADING in CUDA C++ Programming Guide
ERROR: [TRT]: 6: The engine plan file is not compatible with this version of TensorRT, expecting library version 8.5.1.7 got 8.5.2.2, please rebuild.
ERROR: [TRT]: 4: [runtime.cpp::deserializeCudaEngine::66] Error Code 4: Internal Error (Engine deserialization failed.)
ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:1533 Deserialize engine failed from file: /opt/nvidia/deepstream/deepstream-6.2/samples/models/Container_Detector/int8/resnet18_detector.etlt_b1_gpu0_fp32.engine
0:00:03.344712858 7801 0x5654aa4022f0 WARN nvinfer gstnvinfer.cpp:677:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1897> [UID = 1]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-6.2/samples/models/Container_Detector/int8/resnet18_detector.etlt_b1_gpu0_fp32.engine failed
0:00:03.402577338 7801 0x5654aa4022f0 WARN nvinfer gstnvinfer.cpp:677:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2002> [UID = 1]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-6.2/samples/models/Container_Detector/int8/resnet18_detector.etlt_b1_gpu0_fp32.engine failed, try rebuild
0:00:03.402602070 7801 0x5654aa4022f0 INFO nvinfer gstnvinfer.cpp:680:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1923> [UID = 1]: Trying to create engine from model files
WARNING: [TRT]: CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage. See CUDA_MODULE_LOADING in CUDA C++ Programming Guide
WARNING: [TRT]: CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage. See CUDA_MODULE_LOADING in CUDA C++ Programming Guide
ERROR: [TRT]: 3: [builder.cpp::~Builder::307] Error Code 3: API Usage Error (Parameter check failed at: optimizer/api/builder.cpp::~Builder::307, condition: mObjectCounter.use_count() == 1. Destroying a builder object before destroying objects it created leads to undefined behavior.
)
WARNING: [TRT]: The implicit batch dimension mode has been deprecated. Please create the network with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag whenever possible.
WARNING: …/nvdsinfer/nvdsinfer_model_builder.cpp:1459 Serialize engine failed because of file path: /opt/nvidia/deepstream/deepstream-6.2/samples/models/Container_Detector/int8/resnet18_detector.etlt_b1_gpu0_fp32.engine opened error
0:00:17.719230625 7801 0x5654aa4022f0 WARN nvinfer gstnvinfer.cpp:677:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1950> [UID = 1]: failed to serialize cude engine to file: /opt/nvidia/deepstream/deepstream-6.2/samples/models/Container_Detector/int8/resnet18_detector.etlt_b1_gpu0_fp32.engine
WARNING: [TRT]: CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage. See CUDA_MODULE_LOADING in CUDA C++ Programming Guide
INFO: …/nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT input_1 3x368x640
1 OUTPUT kFLOAT output_bbox/BiasAdd 4x23x40
2 OUTPUT kFLOAT output_cov/Sigmoid 1x23x40
0:00:17.825882777 7801 0x5654aa4022f0 INFO nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus: [UID 1]: Load new model:nvdsanalytics_pgie_config_int8.txt sucessfully
[NvMultiObjectTracker] De-initialized
Running…
ERROR from element uri-decode-bin: Invalid URI "nvdsanalytics_pgie_config_int8.txt".
Error details: gsturidecodebin.c(1383): gen_source_element (): /GstPipeline:nvdsanalytics-test-pipeline/GstBin:source-bin-00/GstURIDecodeBin:uri-decode-bin
Returned, stopping playback
Deleting pipeline
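Two separate problems show up in this run. The "Serialize engine failed because of file path … opened error" warning means the rebuilt engine could not be written back to disk, typically because the samples/models directory is root-owned while the app runs unprivileged, so nvinfer rebuilds the engine on every start instead of caching it. A sketch of a check, assuming the directory from the warning:

```shell
# Sketch, assuming the directory from the serialize warning above. If it is
# not writable by the current user, nvinfer cannot cache the serialized plan
# and rebuilds the engine on every start.
ENGINE_DIR=/opt/nvidia/deepstream/deepstream-6.2/samples/models/Container_Detector/int8
if [ -d "$ENGINE_DIR" ] && [ ! -w "$ENGINE_DIR" ]; then
  echo "not writable: $ENGINE_DIR"
  echo "fix with: sudo chown -R \$USER $ENGINE_DIR"
else
  echo "ok (directory writable or absent): $ENGINE_DIR"
fi
```

The second problem is the "Invalid URI" error: the model loaded fine, but the app was handed the pgie config file where it expects a stream URI.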
Details about my server:
glueck@glueck-WHITLEY:~$ nvidia-smi
Tue Jan 9 10:39:26 2024
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 525.147.05   Driver Version: 525.147.05   CUDA Version: 12.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  Tesla T4            Off  | 00000000:98:00.0 Off |                    0 |
| N/A   40C    P8    16W /  70W |     11MiB / 15360MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|    0   N/A  N/A      1521      G   /usr/lib/xorg/Xorg                  4MiB |
|    0   N/A  N/A      2260      G   /usr/lib/xorg/Xorg                  4MiB |
+-----------------------------------------------------------------------------+
DeepStream version: 6.2
glueck@glueck-WHITLEY:~$ dpkg -l | grep nvinfer
ii libnvinfer-bin 8.5.1-1+cuda11.8 amd64 TensorRT binaries
ii libnvinfer-dev 8.5.1-1+cuda11.8 amd64 TensorRT development libraries and headers
ii libnvinfer-plugin-dev 8.5.1-1+cuda11.8 amd64 TensorRT plugin libraries
ii libnvinfer-plugin8 8.5.1-1+cuda11.8 amd64 TensorRT plugin libraries
ii libnvinfer-samples 8.5.1-1+cuda11.8 all TensorRT samples
ii libnvinfer8 8.5.1-1+cuda11.8 amd64 TensorRT runtime libraries
ii python3-libnvinfer 8.5.1-1+cuda11.8 amd64 Python 3 bindings for TensorRT
ii python3-libnvinfer-dev 8.5.1-1+cuda11.8 amd64 Python 3 development package for TensorRT
Path to files:
glueck@glueck-WHITLEY:~$ cd /opt/nvidia/deepstream/deepstream-6.2/samples/models/Container_Detector/int8
glueck@glueck-WHITLEY:/opt/nvidia/deepstream/deepstream-6.2/samples/models/Container_Detector/int8$ ls
calibration.bin resnet18_detector.etlt_b1_gpu0_fp32.engine
calibration.tensor resnet18_detector.trt
labels.txt resnet18_detector.trt.int8
resnet18_detector.etlt
apps/deepstream-container-detection-etlt$ sudo ./deepstream-nvdsanalytics-test file:///home/glueck/Downloads/TopLow.mp4
[sudo] password for glueck:
./deepstream-nvdsanalytics-test: error while loading shared libraries: libnvdsgst_meta.so: cannot open shared object file: No such file or directory
If "/opt/nvidia/deepstream/deepstream/lib/libnvdsgst_meta.so" exists, please run "export LD_LIBRARY_PATH=/opt/nvidia/deepstream/deepstream/lib/:$LD_LIBRARY_PATH" first, then try again.
glueck@glueck-WHITLEY:/opt/nvidia/deepstream/deepstream-6.2/sources/apps/sample_apps/deepstream-container-detection-etlt$ sudo ./deepstream-nvdsanalytics-test file:///home/glueck/Downloads/TopLow.mp4
[sudo] password for glueck:
./deepstream-nvdsanalytics-test: error while loading shared libraries: libnvdsgst_meta.so: cannot open shared object file: No such file or directory
glueck@glueck-WHITLEY:/opt/nvidia/deepstream/deepstream-6.2/sources/apps/sample_apps/deepstream-container-detection-etlt$ sudo ./deepstream-nvdsanalytics-test file:///home/glueck/Downloads/TopLow.mp4
./deepstream-nvdsanalytics-test: error while loading shared libraries: libnvdsgst_meta.so: cannot open shared object file: No such file or directory
glueck@glueck-WHITLEY:/opt/nvidia/deepstream/deepstream-6.2/sources/apps/sample_apps/deepstream-container-detection-etlt$ export LD_LIBRARY_PATH=/opt/nvidia/deepstream/deepstream/lib/:$LD_LIBRARY_PATH
glueck@glueck-WHITLEY:/opt/nvidia/deepstream/deepstream-6.2/sources/apps/sample_apps/deepstream-container-detection-etlt$ sudo ./deepstream-nvdsanalytics-test file:///home/glueck/Downloads/TopLow.mp4
./deepstream-nvdsanalytics-test: error while loading shared libraries: libnvdsgst_meta.so: cannot open shared object file: No such file or directory
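The export above never reaches the sudo'ed process: sudo resets the environment by default (env_reset in sudoers), so LD_LIBRARY_PATH is stripped before the app starts. A sketch of two workarounds, assuming the default DeepStream 6.2 library path:

```shell
# sudo strips LD_LIBRARY_PATH by default (env_reset in sudoers), so exporting
# it in the user shell and then running "sudo ./app" has no effect.
DS_LIB=/opt/nvidia/deepstream/deepstream/lib
export LD_LIBRARY_PATH="$DS_LIB:$LD_LIBRARY_PATH"

# Workaround 1: pass the variable through sudo explicitly:
#   sudo env LD_LIBRARY_PATH="$LD_LIBRARY_PATH" ./deepstream-nvdsanalytics-test file:///home/glueck/Downloads/TopLow.mp4
# Workaround 2: register the path system-wide so sudo no longer matters:
#   echo "$DS_LIB" | sudo tee /etc/ld.so.conf.d/deepstream.conf && sudo ldconfig
echo "LD_LIBRARY_PATH=$LD_LIBRARY_PATH"
```

Running without sudo (as in the next attempt below) also sidesteps the problem, since the user shell's exported variable is then visible to the process.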
glueck@glueck-WHITLEY:/opt/nvidia/deepstream/deepstream-6.2/sources/apps/sample_apps/deepstream-container-detection-etlt$ ./deepstream-nvdsanalytics-test -c /opt/nvidia/deepstream/deepstream-6.2/sources/apps/sample_apps/deepstream-container-detection-etlt/nvdsanalytics_pgie_config_int8.txt
Warn: 'threshold' parameter has been deprecated. Use 'pre-cluster-threshold' instead.
WARNING: Overriding infer-config batch-size (1) with number of sources (2)
Now playing: -c, /opt/nvidia/deepstream/deepstream-6.2/sources/apps/sample_apps/deepstream-container-detection-etlt/nvdsanalytics_pgie_config_int8.txt,
libEGL warning: DRI3: Screen seems not DRI3 capable
libEGL warning: DRI2: failed to authenticate
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
gstnvtracker: Batch processing is ON
gstnvtracker: Past frame output is OFF
[NvMultiObjectTracker] Initialized
0:00:00.933086851 5979 0x7fd3d8002380 WARN nvinfer gstnvinfer.cpp:677:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1170> [UID = 1]: Warning, OpenCV has been deprecated. Using NMS for clustering instead of cv::groupRectangles with topK = 20 and NMS Threshold = 0.5
WARNING: [TRT]: CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage. See CUDA_MODULE_LOADING in CUDA C++ Programming Guide
ERROR: [TRT]: 6: The engine plan file is not compatible with this version of TensorRT, expecting library version 8.5.1.7 got 8.5.2.2, please rebuild.
ERROR: [TRT]: 4: [runtime.cpp::deserializeCudaEngine::66] Error Code 4: Internal Error (Engine deserialization failed.)
ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:1533 Deserialize engine failed from file: /opt/nvidia/deepstream/deepstream-6.2/samples/models/Container_Detector/int8/resnet18_detector.etlt_b1_gpu0_fp32.engine
0:00:04.651836474 5979 0x7fd3d8002380 WARN nvinfer gstnvinfer.cpp:677:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1897> [UID = 1]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-6.2/samples/models/Container_Detector/int8/resnet18_detector.etlt_b1_gpu0_fp32.engine failed
0:00:04.702026444 5979 0x7fd3d8002380 WARN nvinfer gstnvinfer.cpp:677:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2002> [UID = 1]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-6.2/samples/models/Container_Detector/int8/resnet18_detector.etlt_b1_gpu0_fp32.engine failed, try rebuild
0:00:04.702049823 5979 0x7fd3d8002380 INFO nvinfer gstnvinfer.cpp:680:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1923> [UID = 1]: Trying to create engine from model files
WARNING: [TRT]: CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage. See CUDA_MODULE_LOADING in CUDA C++ Programming Guide
WARNING: [TRT]: CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage. See CUDA_MODULE_LOADING in CUDA C++ Programming Guide
ERROR: [TRT]: 3: [builder.cpp::~Builder::307] Error Code 3: API Usage Error (Parameter check failed at: optimizer/api/builder.cpp::~Builder::307, condition: mObjectCounter.use_count() == 1. Destroying a builder object before destroying objects it created leads to undefined behavior.
)
WARNING: [TRT]: The implicit batch dimension mode has been deprecated. Please create the network with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag whenever possible.
WARNING: …/nvdsinfer/nvdsinfer_model_builder.cpp:1459 Serialize engine failed because of file path: /opt/nvidia/deepstream/deepstream-6.2/samples/models/Container_Detector/int8/resnet18_detector.etlt_b2_gpu0_fp32.engine opened error
0:00:21.324286444 5979 0x7fd3d8002380 WARN nvinfer gstnvinfer.cpp:677:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1950> [UID = 1]: failed to serialize cude engine to file: /opt/nvidia/deepstream/deepstream-6.2/samples/models/Container_Detector/int8/resnet18_detector.etlt_b2_gpu0_fp32.engine
WARNING: [TRT]: CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage. See CUDA_MODULE_LOADING in CUDA C++ Programming Guide
INFO: …/nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT input_1 3x368x640
1 OUTPUT kFLOAT output_bbox/BiasAdd 4x23x40
2 OUTPUT kFLOAT output_cov/Sigmoid 1x23x40
0:00:21.425099695 5979 0x7fd3d8002380 INFO nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus: [UID 1]: Load new model:nvdsanalytics_pgie_config_int8.txt sucessfully
[NvMultiObjectTracker] De-initialized
Running…
ERROR from element uri-decode-bin: Invalid URI "-c".
Error details: gsturidecodebin.c(1383): gen_source_element (): /GstPipeline:nvdsanalytics-test-pipeline/GstBin:source-bin-00/GstURIDecodeBin:uri-decode-bin
Returned, stopping playback
Deleting pipeline
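The "Invalid URI" errors in both runs come from the same mistake: deepstream-nvdsanalytics-test takes stream URIs as positional arguments, and whatever it receives ("-c", or a config .txt filename) is handed straight to uridecodebin, which rejects anything without a URI scheme. A minimal sketch of the kind of check this amounts to (the scheme list is illustrative, not exhaustive):

```shell
# Illustrative sketch: uridecodebin needs a scheme-qualified URI, not a bare
# filename or a command-line flag. The scheme list below is an example only.
check_uri() {
  case "$1" in
    file:///*|rtsp://*|rtsps://*|http://*|https://*) echo "accepted: $1" ;;
    *) echo "rejected (no URI scheme): $1" ;;
  esac
}
check_uri "file:///home/glueck/Downloads/TopLow.mp4"   # accepted
check_uri "nvdsanalytics_pgie_config_int8.txt"         # rejected
check_uri "-c"                                         # rejected
```

So the correct invocation is the one tried earlier: ./deepstream-nvdsanalytics-test file:///home/glueck/Downloads/TopLow.mp4 — the pgie config is read by the app itself, not passed as an argument.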
glueck@glueck-WHITLEY:/opt/nvidia/deepstream/deepstream-6.2/sources/apps/sample_apps/deepstream-test1$ sudo ./deepstream-test1-app /opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.h264
Added elements to bin
Using file: /opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.h264
libEGL warning: DRI3: Screen seems not DRI3 capable
libEGL warning: DRI2: failed to authenticate
WARNING: [TRT]: CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage. See CUDA_MODULE_LOADING in CUDA C++ Programming Guide
WARNING: …/nvdsinfer/nvdsinfer_model_builder.cpp:1487 Deserialize engine failed because file path: /opt/nvidia/deepstream/deepstream-6.2/sources/apps/sample_apps/deepstream-test1/…/…/…/…/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine open error
0:00:03.350120541 6410 0x55f0a1ed6a90 WARN nvinfer gstnvinfer.cpp:677:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1897> [UID = 1]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-6.2/sources/apps/sample_apps/deepstream-test1/…/…/…/…/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine failed
0:00:03.400064536 6410 0x55f0a1ed6a90 WARN nvinfer gstnvinfer.cpp:677:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2002> [UID = 1]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-6.2/sources/apps/sample_apps/deepstream-test1/…/…/…/…/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine failed, try rebuild
0:00:03.400135298 6410 0x55f0a1ed6a90 INFO nvinfer gstnvinfer.cpp:680:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1923> [UID = 1]: Trying to create engine from model files
WARNING: [TRT]: CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage. See CUDA_MODULE_LOADING in CUDA C++ Programming Guide
WARNING: [TRT]: The implicit batch dimension mode has been deprecated. Please create the network with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag whenever possible.
0:00:30.430116008 6410 0x55f0a1ed6a90 INFO nvinfer gstnvinfer.cpp:680:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1955> [UID = 1]: serialize cuda engine to file: /opt/nvidia/deepstream/deepstream-6.2/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine successfully
WARNING: [TRT]: CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage. See CUDA_MODULE_LOADING in CUDA C++ Programming Guide
INFO: …/nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT input_1 3x368x640
1 OUTPUT kFLOAT conv2d_bbox 16x23x40
2 OUTPUT kFLOAT conv2d_cov/Sigmoid 4x23x40
0:00:30.528749517 6410 0x55f0a1ed6a90 INFO nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus: [UID 1]: Load new model:dstest1_pgie_config.txt sucessfully
Running…
cuGraphicsGLRegisterBuffer failed with error(219) gst_eglglessink_cuda_init texture = 1
Frame Number = 0 Number of objects = 12 Vehicle Count = 8 Person Count = 4
0:00:31.136899894 6410 0x55f0a0946300 WARN nvinfer gstnvinfer.cpp:2369:gst_nvinfer_output_loop: error: Internal data stream error.
0:00:31.136917464 6410 0x55f0a0946300 WARN nvinfer gstnvinfer.cpp:2369:gst_nvinfer_output_loop: error: streaming stopped, reason not-negotiated (-4)
ERROR from element primary-nvinference-engine: Internal data stream error.
Error details: gstnvinfer.cpp(2369): gst_nvinfer_output_loop (): /GstPipeline:dstest1-pipeline/GstNvInfer:primary-nvinference-engine:
streaming stopped, reason not-negotiated (-4)
Returned, stopping playback
Frame Number = 1 Number of objects = 11 Vehicle Count = 8 Person Count = 3
Frame Number = 2 Number of objects = 11 Vehicle Count = 7 Person Count = 4
nvstreammux: Successfully handled EOS for source_id=0
Frame Number = 3 Number of objects = 13 Vehicle Count = 8 Person Count = 5
Frame Number = 4 Number of objects = 12 Vehicle Count = 8 Person Count = 4
Frame Number = 5 Number of objects = 12 Vehicle Count = 8 Person Count = 4
Frame Number = 6 Number of objects = 11 Vehicle Count = 7 Person Count = 4
Deleting pipeline
There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.
If that libnvdsgst_meta.so is valid, please run "rm -rf ~/.cache/gstreamer-1.0/" first, then try again.
If it still doesn't work, please share the output of "ldd /opt/nvidia/deepstream/deepstream/lib/gst-plugins/libnvdsgst_infer.so" and "ldd /opt/nvidia/deepstream/deepstream/lib/gst-plugins/libnvdsgst_dsanalytics.so". Thanks!
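The two ldd commands above can be wrapped in a small loop that surfaces only the unresolved dependencies; any "not found" line names the library that must be added to the loader path (plugin paths assumed from the post above):

```shell
# Diagnostic sketch using the plugin paths from the post above; each
# "not found" line printed by ldd names a missing dependency.
for so in /opt/nvidia/deepstream/deepstream/lib/gst-plugins/libnvdsgst_infer.so \
          /opt/nvidia/deepstream/deepstream/lib/gst-plugins/libnvdsgst_dsanalytics.so; do
  if [ ! -e "$so" ]; then
    echo "missing: $so"
    continue
  fi
  ldd "$so" | grep "not found" || echo "$so: all dependencies resolved"
done
```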