After installing for dGPU following the instructions here, I tried to run the deepstream-app (the reference application)
and I get the same error for all configurations:
** ERROR: <main:658>: Failed to set pipeline to PAUSED
Quitting
ERROR from src_bin_muxer: Output width not set
Debug info: gstnvstreammux.c(2779): gst_nvstreammux_change_state (): /GstPipeline:pipeline/GstBin:multi_src_bin/GstNvStreamMux:src_bin_muxer
App run failed
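For what it's worth (an assumption on my side, based on the shipped sample configs): "Output width not set" from nvstreammux typically means the muxer's output resolution was never configured. In deepstream-app configuration files this comes from the [streammux] group; a minimal sketch with placeholder values matching the 1080p samples:

```ini
# Sketch of the [streammux] group in a deepstream-app config.
# width/height must be set or nvstreammux fails at state change.
[streammux]
gpu-id=0
batch-size=1
width=1920
height=1080
batched-push-timeout=40000
```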
Then I compiled the sample, ran ./deepstream-test1-app ./sample_1080p_h264.mp4, and it hangs after this output:
Now playing: ./sample_1080p_h264.mp4
0:00:00.819220941 8215 0x5624614c7a30 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1900> [UID = 1]: deserialized trt engine from :/home/david/nvidia/deepstream-6.0/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT input_1 3x368x640
1 OUTPUT kFLOAT conv2d_bbox 16x23x40
2 OUTPUT kFLOAT conv2d_cov/Sigmoid 4x23x40
0:00:00.819284225 8215 0x5624614c7a30 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2004> [UID = 1]: Use deserialized engine model: /home/david/nvidia/deepstream-6.0/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine
0:00:00.819972164 8215 0x5624614c7a30 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<primary-nvinference-engine> [UID 1]: Load new model:dstest1_pgie_config.txt sucessfully
Running...
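One thing worth checking (an assumption, based on the sample's README): deepstream-test1 parses a raw H.264 elementary stream, not an MP4 container, so passing sample_1080p_h264.mp4 can stall the pipeline. A minimal guard sketch before launching:

```shell
# Sketch: deepstream-test1 expects a raw H.264 elementary stream,
# not an MP4 container (assumption; see the sample's README).
stream="sample_720p.h264"          # .h264 file shipped under samples/streams
case "$stream" in
  *.h264) echo "ok: elementary stream" ;;
  *)      echo "warn: container file - demux to .h264 first" ;;
esac
```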
And on my last try, when making deepstream-segmentation-test, I get:
sudo make
cc -c -o deepstream_segmentation_app.o -I../../../includes -I /usr/local/cuda-11.4/include -pthread -I/usr/include/gstreamer-1.0 -I/usr/include/glib-2.0 -I/usr/lib/x86_64-linux-gnu/glib-2.0/include deepstream_segmentation_app.c
In file included from deepstream_segmentation_app.c:30:0:
/usr/local/cuda-11.4/include/cuda_runtime_api.h:147:10: fatal error: crt/host_defines.h: No such file or directory
#include "crt/host_defines.h"
^~~~~~~~~~~~~~~~~~~~
compilation terminated.
Makefile:63: recipe for target 'deepstream_segmentation_app.o' failed
make: *** [deepstream_segmentation_app.o] Error 1
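A guess at the cause: the sample Makefiles take a CUDA_VER variable that must match the toolkit actually installed, and a missing crt/host_defines.h suggests it points at an incomplete or wrong toolkit. A sketch of deriving it from the installed toolkit directory (the path pattern is an assumption for this machine):

```shell
# Sketch: pick CUDA_VER from the newest installed toolkit directory
# (the /usr/local/cuda-* layout is an assumption).
cuda_path=$(ls -d /usr/local/cuda-* 2>/dev/null | sort -V | tail -n 1)
cuda_ver="${cuda_path##*cuda-}"   # e.g. "11.6" from "/usr/local/cuda-11.6"
echo "CUDA_VER=$cuda_ver"
# then build with: make CUDA_VER="$cuda_ver"
```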
Which GPU you are using? can you get nvidia-smi to run?
The include problem is solved by changing CUDA_VER to 11.6 in the Makefile, BUT now I am getting CUDA errors:
./deepstream-segmentation-app dstest_segmentation_config_semantic.txt sample_720p.mjpeg sample_720p.mjpeg
Now playing: sample_720p.mjpeg, sample_720p.mjpeg,
0:00:00.212244533 21668 0x55575f8b7270 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1914> [UID = 1]: Trying to create engine from model files
WARNING: [TRT]: Detected invalid timing cache, setup a local cache instead
0:00:06.956276277 21668 0x55575f8b7270 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1947> [UID = 1]: serialize cuda engine to file: /home/david/nvidia/deepstream-6.0/samples/models/Segmentation/semantic/unetres18_v4_pruned0.65_800_data.uff_b2_gpu0_fp32.engine successfully
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 2
0 INPUT kFLOAT data 3x512x512
1 OUTPUT kFLOAT final_conv/BiasAdd 4x512x512
0:00:06.987146207 21668 0x55575f8b7270 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<primary-nvinference-engine> [UID 1]: Load new model:dstest_segmentation_config_semantic.txt sucessfully
Running...
in videoconvert caps = video/x-raw(memory:NVMM), format=(string)RGBA, framerate=(fraction)1/1, width=(int)512, height=(int)512
Cuda failure: status=1 in cuResData at line 316
Cuda failure: status=1 in cuResData at line 348
Cuda failure: status=1 in cuResData at line 316
Cuda failure: status=1 in cuResData at line 348
ERROR: nvdsinfer_context_impl.cpp:341 Failed to make stream wait on event, cuda err_no:700, err_str:cudaErrorIllegalAddress
ERROR: nvdsinfer_context_impl.cpp:1619 Preprocessor transform input data failed., nvinfer error:NVDSINFER_CUDA_ERROR
0:00:07.256931948 21668 0x55575f84b590 WARN nvinfer gstnvinfer.cpp:1324:gst_nvinfer_input_queue_loop:<primary-nvinference-engine> error: Failed to queue input batch for inferencing
ERROR from element primary-nvinference-engine: Failed to queue input batch for inferencing
Error details: gstnvinfer.cpp(1324): gst_nvinfer_input_queue_loop (): /GstPipeline:dstest-image-decode-pipeline/GstNvInfer:primary-nvinference-engine
Returned, stopping playback
Cuda failure: status=700 in cuResData at line 337
Cuda failure: status=700 in cuResData at line 337
Cuda failure: status=700 in cuResData at line 337
Cuda failure: status=700 in cuResData at line 337
Sure! nvidia-smi runs and I use it all the time… The GPU is a GTX 1080 Ti.
So which of the sample files is the proper one for the test?
Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
• Requirement details (This is for new requirements. Include the module name, i.e. which plugin or which sample application, and the function description.)
I reinstalled Ubuntu twice and followed the instructions twice, and yet somehow I end up with a multitude of CUDA versions, although the cuda directory does point to CUDA 11.6.
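Multiple /usr/local/cuda-* directories are not unusual (the driver, the toolkit, and various SDK installers can each pull one in); what matters is that the cuda symlink, nvcc, and the Makefile's CUDA_VER agree. A sketch of pulling the version nvidia-smi reports out of its banner, using a hard-coded sample line so the parsing is visible (the line's format is an assumption of the usual layout):

```shell
# Sketch: extract "CUDA Version" from nvidia-smi's banner line.
# The sample line is hard-coded here to show the parsing; on a real
# box you might use something like: line=$(nvidia-smi | grep "CUDA Version")
line="| NVIDIA-SMI 510.47.03    Driver Version: 510.47.03    CUDA Version: 11.6     |"
cuda_ver=$(printf '%s\n' "$line" | sed -n 's/.*CUDA Version: \([0-9.]*\).*/\1/p')
echo "$cuda_ver"    # prints 11.6
```

If this driver-reported version disagrees with `readlink -f /usr/local/cuda`, that mismatch is a likely source of the "soup".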
And then follow the text I wrote in my original post.
Is there paid support for DeepStream? Can I trust it for an enterprise client's project?
Yes! Two months without any interest on your part, after your previous involvement was to ask totally irrelevant questions!!
This issue is still important to us so that we can make a good technical decision, and I am willing to invest some time in this. But right now, because NVIDIA has pretty much abandoned us, we are basing our project on Intel's FPGA pipeline, and we have been able to advance there quite well, with great ongoing support, including free telephone support.
DeepStream support, on the other hand, has been non-existent.
JetPack 4.6.1 includes CUDA 10.2, meaning DeepStream is then not compatible with the Jetson platform!!!
But in any case, I have a new install, and after following these instructions, I have a soup of CUDA versions on my computer, ALL INSTALLED BY NVIDIA DRIVERS AND TOOLS…
There has been no update from you for a period, so we are assuming this is no longer an issue.
Hence we are closing this topic. If you need further support, please open a new one.
Thanks