Hi guys, I just wanted to take a look at the DeepStream SDK, but I'm getting stuck executing the samples.
First of all, here is my system information:
• Hardware Platform (Jetson / GPU) → dGPU (GeForce RTX 3070 Laptop GPU)
• DeepStream Version → 6.1
• JetPack Version (valid for Jetson only) → —
• TensorRT Version → 8.4
• NVIDIA GPU Driver Version (valid for GPU only) → 510.47.03
• Issue Type( questions, new requirements, bugs) → questions, requirements
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing) → —
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description) → —
When I try to run the samples, they seem to start and run (i.e., there is no immediate shutdown), but after some time they break and throw errors.
Output for deepstream-test1 (CUDA version 11.6 set in the Makefile):
../deepstream/sources/apps/sample_apps/deepstream-test1$ ./deepstream-test1-app dstest1_config.yml
Using file: dstest1_config.yml
WARNING: ../nvdsinfer/nvdsinfer_model_builder.cpp:1482 Deserialize engine failed because file path: /opt/nvidia/deepstream/deepstream-6.1/sources/apps/sample_apps/deepstream-test1/../../../../samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine open error
0:00:01.195264802 78089 0x5582a74656c0 WARN nvinfer gstnvinfer.cpp:643:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1888> [UID = 1]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-6.1/sources/apps/sample_apps/deepstream-test1/../../../../samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine failed
0:00:01.266011245 78089 0x5582a74656c0 WARN nvinfer gstnvinfer.cpp:643:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1993> [UID = 1]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-6.1/sources/apps/sample_apps/deepstream-test1/../../../../samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine failed, try rebuild
0:00:01.266032057 78089 0x5582a74656c0 INFO nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1914> [UID = 1]: Trying to create engine from model files
WARNING: [TRT]: TensorRT was linked against cuBLAS/cuBLAS LT 11.8.0 but loaded cuBLAS/cuBLAS LT 110.9.2
WARNING: ../nvdsinfer/nvdsinfer_model_builder.cpp:1454 Serialize engine failed because of file path: /opt/nvidia/deepstream/deepstream-6.1/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine opened error
0:00:23.213474136 78089 0x5582a74656c0 WARN nvinfer gstnvinfer.cpp:643:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1941> [UID = 1]: failed to serialize cude engine to file: /opt/nvidia/deepstream/deepstream-6.1/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT input_1 3x368x640
1 OUTPUT kFLOAT conv2d_bbox 16x23x40
2 OUTPUT kFLOAT conv2d_cov/Sigmoid 4x23x40
0:00:23.288412926 78089 0x5582a74656c0 INFO nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<primary-nvinference-engine> [UID 1]: Load new model:dstest1_pgie_config.yml sucessfully
Running...
Error String : Feature not supported on this GPUError Code : 801
ERROR from element nvv4l2-decoder: Failed to process frame.
Error details: gstv4l2videodec.c(1747): gst_v4l2_video_dec_handle_frame (): /GstPipeline:dstest1-pipeline/nvv4l2decoder:nvv4l2-decoder:
Maybe be due to not enough memory or failing driver
Returned, stopping playback
Deleting pipeline
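One thing I noticed in the log above: the engine rebuild itself succeeds, but serializing it back fails ("Serialize engine failed because of file path … opened error"). My guess is that the process simply cannot write the generated .engine file into /opt. Here is the quick check I would run to rule that out (`check_writable` is just a throwaway helper, not part of DeepStream):

```shell
# Report whether a directory is writable; the serialize warning above
# usually means the app cannot write the rebuilt .engine file back
# into the (root-owned) samples directory under /opt.
check_writable() {
  if [ -w "$1" ]; then
    echo "writable: $1"
  else
    echo "not writable: $1"
  fi
}

check_writable /opt/nvidia/deepstream/deepstream-6.1/samples/models/Primary_Detector
```

If it reports "not writable", re-running the sample with sudo or chown-ing the samples directory should at least silence the serialize warnings, though the decoder error is presumably a separate issue.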
Output for the GStreamer pipeline (built according to the FAQ entry "How can I construct the DeepStream GStreamer pipeline?"):
.../deepstream/samples$ gst-launch-1.0 filesrc location = ./streams/sample_1080p_h264.mp4 ! qtdemux ! h264parse ! nvv4l2decoder \
> ! m.sink_0 nvstreammux name=m width=1280 height=720 batch-size=1 ! nvinfer config-file-path= ./configs/deepstream-app/config_infer_primary.txt \
> ! dsexample full-frame=1 ! nvvideoconvert ! nvdsosd ! nveglglessink sync=0
Setting pipeline to PAUSED ...
WARNING: ../nvdsinfer/nvdsinfer_model_builder.cpp:1482 Deserialize engine failed because file path: /opt/nvidia/deepstream/deepstream-6.1/samples/configs/deepstream-app/../../models/Primary_Detector/resnet10.caffemodel_b30_gpu0_int8.engine open error
0:00:01.220016547 78919 0x563bd7c83130 WARN nvinfer gstnvinfer.cpp:643:gst_nvinfer_logger:<nvinfer0> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1888> [UID = 1]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-6.1/samples/configs/deepstream-app/../../models/Primary_Detector/resnet10.caffemodel_b30_gpu0_int8.engine failed
0:00:01.291913687 78919 0x563bd7c83130 WARN nvinfer gstnvinfer.cpp:643:gst_nvinfer_logger:<nvinfer0> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1993> [UID = 1]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-6.1/samples/configs/deepstream-app/../../models/Primary_Detector/resnet10.caffemodel_b30_gpu0_int8.engine failed, try rebuild
0:00:01.291935897 78919 0x563bd7c83130 INFO nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger:<nvinfer0> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1914> [UID = 1]: Trying to create engine from model files
WARNING: [TRT]: TensorRT was linked against cuBLAS/cuBLAS LT 11.8.0 but loaded cuBLAS/cuBLAS LT 110.9.2
WARNING: ../nvdsinfer/nvdsinfer_model_builder.cpp:1454 Serialize engine failed because of file path: /opt/nvidia/deepstream/deepstream-6.1/samples/models/Primary_Detector/resnet10.caffemodel_b30_gpu0_int8.engine opened error
0:00:32.412411849 78919 0x563bd7c83130 WARN nvinfer gstnvinfer.cpp:643:gst_nvinfer_logger:<nvinfer0> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1941> [UID = 1]: failed to serialize cude engine to file: /opt/nvidia/deepstream/deepstream-6.1/samples/models/Primary_Detector/resnet10.caffemodel_b30_gpu0_int8.engine
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT input_1 3x368x640
1 OUTPUT kFLOAT conv2d_bbox 16x23x40
2 OUTPUT kFLOAT conv2d_cov/Sigmoid 4x23x40
0:00:32.488292581 78919 0x563bd7c83130 INFO nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<nvinfer0> [UID 1]: Load new model:./configs/deepstream-app/config_infer_primary.txt sucessfully
Pipeline is PREROLLING ...
Got context from element 'eglglessink0': gst.egl.EGLDisplay=context, display=(GstEGLDisplay)NULL;
Error String : Feature not supported on this GPUError Code : 801
ERROR: from element /GstPipeline:pipeline0/nvv4l2decoder:nvv4l2decoder0: Failed to process frame.
Additional debug info:
gstv4l2videodec.c(1747): gst_v4l2_video_dec_handle_frame (): /GstPipeline:pipeline0/nvv4l2decoder:nvv4l2decoder0:
Maybe be due to not enough memory or failing driver
ERROR: pipeline doesn't want to preroll.
Setting pipeline to NULL ...
Freeing pipeline ...
It seems like there is a problem with the nvv4l2decoder…
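To narrow it down, a decode-only pipeline (everything after the decoder replaced by fakesink) should show whether nvv4l2decoder fails on its own. This is just a sketch, guarded so it can be pasted anywhere (`run_decode_only` is a throwaway helper of mine):

```shell
# Strip the pipeline down to demux + parse + decode: if error 801 shows
# up here too, nvv4l2decoder (or the driver underneath) is the culprit,
# not nvinfer/nvdsosd further down the pipeline.
run_decode_only() {
  if ! command -v gst-launch-1.0 >/dev/null 2>&1; then
    echo "gst-launch-1.0 not found"
    return 0
  fi
  gst-launch-1.0 filesrc location="$1" ! qtdemux ! h264parse \
    ! nvv4l2decoder ! fakesink
}

run_decode_only ./streams/sample_1080p_h264.mp4 || echo "decode-only pipeline failed"
```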
To give the whole picture, here are the remaining implementation details relevant to the DeepStream SDK:
System: Ubuntu 20.04 (upgraded from 18.04, no fresh install)
CUDA: 11.6.2
GStreamer: 1.16.2
TensorRT: 8.4.0.6-1+cuda11.6
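For completeness, this is roughly how I collected the numbers above (`show` is just a throwaway wrapper that tolerates missing tools):

```shell
# Print the first line of each tool's version output, or a placeholder
# if the tool is not installed, so the script never aborts halfway.
show() {
  if command -v "$1" >/dev/null 2>&1; then
    "$@" 2>&1 | head -n 1
  else
    echo "$1: not found"
  fi
}

show lsb_release -d             # Ubuntu release
show nvcc --version             # CUDA toolkit
show gst-launch-1.0 --version   # GStreamer
show nvidia-smi --query-gpu=driver_version --format=csv,noheader
```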
-
I know I could (and probably should) check whether the problem disappears with a freshly installed Ubuntu instead of an upgraded one… but since I have already spent a week on this topic, I first wanted to make sure that a fresh install is worthwhile. Besides that, here are some general thoughts/questions:
-
The GeForce RTX 3070 Laptop GPU in use should be capable of and supported by DeepStream 6.1. Or might this be the cause of the problem?
-
As you can see, I mistakenly installed CUDA 11.6.2 instead of 11.6.1. But I assume this patch release should be backwards-compatible and no problem at all, or is the DeepStream SDK really that restrictive?
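My assumption here is that only the major.minor part of the CUDA version matters for compatibility. In toy form (`same_major_minor` is just a helper I made up, not anything DeepStream actually does):

```shell
# Toy check of the assumption that a CUDA patch-level difference is
# harmless: compare only the major.minor components of two versions.
same_major_minor() {
  [ "$(echo "$1" | cut -d. -f1-2)" = "$(echo "$2" | cut -d. -f1-2)" ] \
    && echo yes || echo no
}

same_major_minor 11.6.1 11.6.2   # documented vs. installed: prints "yes"
same_major_minor 11.5.0 11.6.0   # prints "no"
```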
-
The TensorRT version also differs from the one mentioned for the DeepStream SDK… Well, in my defense: I tried to install TensorRT 8.2.5.1, but it would not succeed, because it only supports up to CUDA 11.5 and has no support for CUDA 11.6 (which DeepStream actually requires). Maybe someone can tell me how to fulfill the stated version requirements anyway… Or am I really supposed to keep two different CUDA versions: one for TensorRT and one for DeepStream?
-
I searched this forum and Google for similar problems and found no comparable entries. So: where is my mistake? Am I missing some small, silly thing?
Thanks in advance and best regards