New installation: Multiple Failures

After installing for dGPU following the instructions here, I tried to run the deepstream-app (the reference application).

I get the same error for all configurations:

** ERROR: <main:658>: Failed to set pipeline to PAUSED
Quitting

ERROR from src_bin_muxer: Output width not set
Debug info: gstnvstreammux.c(2779): gst_nvstreammux_change_state (): /GstPipeline:pipeline/GstBin:multi_src_bin/GstNvStreamMux:src_bin_muxer
App run failed

Then I compiled the sample and ran ./deepstream-test1-app ./sample_1080p_h264.mp4, and it hangs after this output:

Now playing: ./sample_1080p_h264.mp4
0:00:00.819220941  8215 0x5624614c7a30 INFO                 nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1900> [UID = 1]: deserialized trt engine from :/home/david/nvidia/deepstream-6.0/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 3
0   INPUT  kFLOAT input_1         3x368x640       
1   OUTPUT kFLOAT conv2d_bbox     16x23x40        
2   OUTPUT kFLOAT conv2d_cov/Sigmoid 4x23x40         

0:00:00.819284225  8215 0x5624614c7a30 INFO                 nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2004> [UID = 1]: Use deserialized engine model: /home/david/nvidia/deepstream-6.0/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine
0:00:00.819972164  8215 0x5624614c7a30 INFO                 nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<primary-nvinference-engine> [UID 1]: Load new model:dstest1_pgie_config.txt sucessfully
Running...

And on my last try, when building deepstream-segmentation-test, I get:


sudo make
cc -c -o deepstream_segmentation_app.o -I../../../includes -I /usr/local/cuda-11.4/include -pthread -I/usr/include/gstreamer-1.0 -I/usr/include/glib-2.0 -I/usr/lib/x86_64-linux-gnu/glib-2.0/include deepstream_segmentation_app.c
In file included from deepstream_segmentation_app.c:30:0:
/usr/local/cuda-11.4/include/cuda_runtime_api.h:147:10: fatal error: crt/host_defines.h: No such file or directory
 #include "crt/host_defines.h"
          ^~~~~~~~~~~~~~~~~~~~
compilation terminated.
Makefile:63: recipe for target 'deepstream_segmentation_app.o' failed
make: *** [deepstream_segmentation_app.o] Error 1

Thanks for any help and guidance!!

D

ERROR from src_bin_muxer: Output width not set
Debug info: gstnvstreammux.c(2779): gst_nvstreammux_change_state (): /GstPipeline:pipeline/GstBin:multi_src_bin/GstNvStreamMux:src_bin_muxer
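
This error is raised when the muxer's output resolution was never configured. In a deepstream-app configuration file that corresponds to the [streammux] group, which must set the output size explicitly; a minimal sketch (the values below are placeholders, not taken from your config):

[streammux]
# Resolution of the batched output buffer; inputs are scaled to this size
width=1920
height=1080
# Number of sources muxed into one batch
batch-size=1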

Which GPU are you using? Can you get nvidia-smi to run?

The test1 app only accepts an H.264 elementary stream.
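
If you only have the .mp4, you can strip the container into a raw Annex-B stream first; a hedged example, assuming ffmpeg is installed (file names are only illustrative):

# Drop audio, copy the H.264 video untouched, rewrite MP4 headers to Annex-B
ffmpeg -i sample_1080p_h264.mp4 -an -c:v copy -bsf:v h264_mp4toannexb -f h264 sample_1080p.h264
./deepstream-test1-app sample_1080p.h264

The samples/streams directory should also contain ready-made elementary streams such as sample_720p.h264.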

Can you find this file?

Thanks for your reply!

The include problem was solved by setting CUDA_VER to 11.6 in the Makefile (the build invocation I used is sketched after the log below), BUT now I am getting CUDA errors:

./deepstream-segmentation-app dstest_segmentation_config_semantic.txt sample_720p.mjpeg sample_720p.mjpeg
Now playing: sample_720p.mjpeg, sample_720p.mjpeg,

0:00:00.212244533 21668 0x55575f8b7270 INFO                 nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1914> [UID = 1]: Trying to create engine from model files

WARNING: [TRT]: Detected invalid timing cache, setup a local cache instead

0:00:06.956276277 21668 0x55575f8b7270 INFO                 nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1947> [UID = 1]: serialize cuda engine to file: /home/david/nvidia/deepstream-6.0/samples/models/Segmentation/semantic/unetres18_v4_pruned0.65_800_data.uff_b2_gpu0_fp32.engine successfully

INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 2
0   INPUT  kFLOAT data            3x512x512       
1   OUTPUT kFLOAT final_conv/BiasAdd 4x512x512       

0:00:06.987146207 21668 0x55575f8b7270 INFO                 nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<primary-nvinference-engine> [UID 1]: Load new model:dstest_segmentation_config_semantic.txt sucessfully
Running...
in videoconvert caps = video/x-raw(memory:NVMM), format=(string)RGBA, framerate=(fraction)1/1, width=(int)512, height=(int)512
Cuda failure: status=1 in cuResData at line 316
Cuda failure: status=1 in cuResData at line 348
Cuda failure: status=1 in cuResData at line 316
Cuda failure: status=1 in cuResData at line 348
ERROR: nvdsinfer_context_impl.cpp:341 Failed to make stream wait on event, cuda err_no:700, err_str:cudaErrorIllegalAddress
ERROR: nvdsinfer_context_impl.cpp:1619 Preprocessor transform input data failed., nvinfer error:NVDSINFER_CUDA_ERROR
0:00:07.256931948 21668 0x55575f84b590 WARN                 nvinfer gstnvinfer.cpp:1324:gst_nvinfer_input_queue_loop:<primary-nvinference-engine> error: Failed to queue input batch for inferencing
ERROR from element primary-nvinference-engine: Failed to queue input batch for inferencing
Error details: gstnvinfer.cpp(1324): gst_nvinfer_input_queue_loop (): /GstPipeline:dstest-image-decode-pipeline/GstNvInfer:primary-nvinference-engine
Returned, stopping playback
Cuda failure: status=700 in cuResData at line 337
Cuda failure: status=700 in cuResData at line 337
Cuda failure: status=700 in cuResData at line 337
Cuda failure: status=700 in cuResData at line 337
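
For reference, the build invocation mentioned above; a sketch assuming the stock DeepStream 6.0 sample Makefile (which reads a CUDA_VER variable) and my install path:

cd ~/nvidia/deepstream-6.0/sources/apps/sample_apps/deepstream-segmentation-test
# Override CUDA_VER on the command line instead of editing the Makefile
sudo make CUDA_VER=11.6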

Sure! nvidia-smi runs and I use it all the time… The GPU is a GTX 1080 Ti.

So which of the sample files is the proper one for this test?

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
• Requirement details (This is for new requirements. Include the module name, i.e. which plugin or which sample application, and the function description.)

Everything is freshly installed. As I said in the title: NEW installation.

• Hardware Platform (GPU): dGPU GTX 1080 Ti
• NVIDIA GPU Driver Version: NVIDIA-Linux-x86_64-470.63.01.run
• TensorRT Version: nv-tensorrt-repo-ubuntu1804-cuda11.3-trt8.0.1.6-ga-20210626_1-1_amd64.deb
• DeepStream Version: deepstream-6.0_6.0.0-1_amd64.deb

• Issue Type: Nothing works!

• How to reproduce the issue: Follow the instructions here: https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_Quickstart.html#dgpu-setup-for-ubuntu

I reinstalled Ubuntu twice and followed the instructions twice, and yet somehow I end up with a multitude of CUDA versions, although the cuda directory does point to CUDA 11.6.

And then follow the steps I described in my original post.

Is there paid support for DeepStream? Can I trust it for an enterprise client's project?

Sorry for the late reply.
Is this still an issue?

Yes! Two months without any interest on your part, after your previous involvement was to ask totally irrelevant questions!!!

This issue is still important to me so that we can make a good technical decision, and I am willing to invest some time in it. But right now, because NVIDIA has pretty much abandoned us, we are basing our project on Intel's FPGA pipeline, and we have been able to advance there quite well, with great ongoing support, including free telephone support.

DeepStream support, on the other hand, has been non-existent.

Many thanks for your generosity!

Dave

Sorry for the bad experience; we will definitely improve our customer support in the forum.
Looking forward to seeing you soon.

Thanks

It took you two weeks to say how sorry you are, which I know is just “lip service”. I don’t think you are truly sorry.

How about looking at the problem and providing a solution? All of the answers to this post have been disconnected from the problem.

DeepStream 6.0 is based on CUDA 11.4.1 (Quickstart Guide — DeepStream 6.3 Release documentation); please reinstall your system according to the instructions.
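
After reinstalling, you can check which toolkit is actually active; a quick sketch, assuming the standard CUDA filesystem layout:

# Toolkit the /usr/local/cuda symlink resolves to
readlink -f /usr/local/cuda
# Compiler version of the active toolkit
/usr/local/cuda/bin/nvcc --version
# Every toolkit currently on disk
ls -d /usr/local/cuda*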

JetPack 4.6.1 includes CUDA 10.2, meaning DeepStream is then not compatible with the Jetson platform!!!

But, in any case, I have a new install, and after following these instructions I have a soup of CUDA versions on my computer, ALL INSTALLED BY NVIDIA DRIVERS AND TOOLS.

nvidia-smi reports CUDA 11.4

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 470.103.01   Driver Version: 470.103.01   CUDA Version: 11.4     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA GeForce ...  Off  | 00000000:65:00.0  On |                  N/A |
|  0%   44C    P8    41W / 350W |   1438MiB / 24260MiB |      2%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

The NVIDIA SDK Manager v1.7.3.9053, however, TODAY installed CUDA 9.1 without asking, and in /usr/local I have cuda, cuda-10, and cuda-10.2, and I have no idea how they got there.
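
For what it's worth, this is how I tried to trace where they came from; standard dpkg tooling, which only works for versions installed from .deb packages, not from runfiles:

# List every installed CUDA-related package
dpkg -l | grep -i cuda
# Ask which package owns a given toolkit directory
dpkg -S /usr/local/cuda-10.2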

How do I solve that???

Your suggestion is disconnected from the problem described, as is every other comment made by the moderators. Not a single comment has addressed it!

Just curious: why don't you use the docker images provided for DeepStream?

They have everything installed in them. All you need on your host machine is the NVIDIA driver (which I guess you've already installed).
https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_docker_containers.html
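
For reference, launching the development container looks roughly like this; a sketch based on that page, assuming the NVIDIA Container Toolkit is installed (the image tag may differ for your release):

# Allow local containers to open windows on the host X server
xhost +local:
docker run --gpus all -it --rm \
  -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix \
  nvcr.io/nvidia/deepstream:6.0-devel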

Because I need to integrate directly with other frameworks like ROS and several Intel depth cameras, and I could not do that using the docker images.

What platform are you working on? dGPU or Jetson? The CUDA version DeepStream relies on is not the same for these two platforms.

There has been no update from you for a while, so we are assuming this is no longer an issue.
Hence we are closing this topic. If you need further support, please open a new one.
Thanks

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.