Is it possible to run Deepstream on GTX Titan? Or it has to be Tesla?
When I run with configuration file “source30_720p_dec_infer-resnet_tiled_display_int8.txt”, I got error:
** WARN: <parse_streammux:474>: Unknown key 'config-file' for group [streammux]
** ERROR: <create_multi_source_bin:710>: Failed to create element 'src_bin_muxer'
** ERROR: <create_multi_source_bin:772>: create_multi_source_bin failed
** ERROR: <create_pipeline:790>: create_pipeline failed
** ERROR: <main:588>: Failed to create pipeline
Quitting
App run failed
I changed sink0 to type 1, but it didn’t change anything. Is there any part that I have missed? I installed OpenGL too, and the driver should be installed properly, as the command
shows the GPU I am using.
What is the architecture of your GTX TITAN?
Kepler, Maxwell, or Pascal?
We have been testing with the GTX 1050, RTX TITAN, and RTX 2080, and it works fine on those.
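The cards mentioned in this thread span three architecture generations, which differ in compute capability. The card names and compute capabilities below are NVIDIA's public specifications; the lookup helper itself is just an illustrative sketch:

```python
# Architecture and compute capability of the GPUs discussed in this
# thread (values from NVIDIA's public specs; the helper is a sketch).
GPU_ARCH = {
    "GTX TITAN":        ("Kepler",  3.5),
    "GTX TITAN X":      ("Maxwell", 5.2),
    "TITAN X (Pascal)": ("Pascal",  6.1),
    "GTX 1050":         ("Pascal",  6.1),
    "RTX 2080":         ("Turing",  7.5),
    "TITAN RTX":        ("Turing",  7.5),
}

def describe(card: str) -> str:
    """Return a one-line summary of a card's architecture."""
    arch, cc = GPU_ARCH[card]
    return f"{card}: {arch}, compute capability {cc}"

if __name__ == "__main__":
    for card in GPU_ARCH:
        print(describe(card))
```

The original GTX TITAN (Kepler) predates the cards the team lists as tested, which are all Pascal or Turing.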
Try clearing the cache by executing the command sudo rm -r ~/.cache/gstreamer-1.0/registry*
It is a Titan X, so it should be Maxwell?
Also, I have tried clearing the cache, but I get the same error.
DeepStream can work on GeForce cards, but we don’t do any validation on consumer cards, so you can use it at your own risk. Alternatively, you can either upgrade to a T4 or use Jetson dev kits. With a unified SDK in 4.0, you can port between platforms easily.
Can you show your config file?
# Copyright (c) 2018 NVIDIA Corporation. All rights reserved.
#
# NVIDIA Corporation and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto. Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA Corporation is strictly prohibited.

[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5
#gie-kitti-output-dir=streamscl

[tiled-display]
enable=1
rows=5
columns=6
width=1280
height=720
gpu-id=0
#(0): nvbuf-mem-default - Default memory allocated, specific to particular platform
#(1): nvbuf-mem-cuda-pinned - Allocate Pinned/Host cuda memory
#(2): nvbuf-mem-cuda-device - Allocate Device cuda memory
#(3): nvbuf-mem-cuda-unified - Allocate Unified cuda memory
#(4): nvbuf-mem-surface-array - Allocate Surface Array memory, applicable for Jetson
#(5): nvbuf-mem-handle - Allocate Surface Handle memory, applicable for Jetson
#(6): nvbuf-mem-system - Allocate Surface System memory, allocated using calloc
nvbuf-memory-type=0

[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI
type=3
uri=file://../../streams/sample_720p.mp4
num-sources=15
#drop-frame-interval=2
gpu-id=0
# (0): memtype_device - Memory type Device
# (1): memtype_pinned - Memory type Host Pinned
# (2): memtype_unified - Memory type Unified
cudadec-memtype=0

[source1]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI
type=3
uri=file://../../streams/sample_720p.mp4
num-sources=15
gpu-id=0
# (0): memtype_device - Memory type Device
# (1): memtype_pinned - Memory type Host Pinned
# (2): memtype_unified - Memory type Unified
cudadec-memtype=0

[sink0]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File
type=3
sync=1
source-id=0
gpu-id=0
nvbuf-memory-type=0

[sink1]
enable=0
type=3
#1=mp4 2=mkv
container=1
#1=h264 2=h265 3=mpeg4
## only SW mpeg4 is supported right now.
codec=1
sync=0
#iframeinterval=10
bitrate=2000000
output-file=out.mp4
source-id=0

[sink2]
enable=0
#Type - 1=FakeSink 2=EglSink 3=File 4=RTSPStreaming
type=4
#1=h264 2=h265
codec=1
sync=0
bitrate=4000000
# set below properties in case of RTSPStreaming
rtsp-port=8554
udp-port=5400

[osd]
enable=1
gpu-id=0
border-width=1
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Arial
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0
nvbuf-memory-type=0

[streammux]
gpu-id=0
##Boolean property to inform muxer that sources are live
live-source=0
batch-size=30
##time out in usec, to wait after the first buffer is available
##to push the batch even if the complete batch is not formed
batched-push-timeout=40000
## Set muxer output width and height
width=1280
height=720
##Enable to maintain aspect ratio wrt source, and allow black borders, works
##along with width, height properties
enable-padding=0
nvbuf-memory-type=0
config-file=config_mux_source30.txt

# config-file property is mandatory for any gie section.
# Other properties are optional and if set will override the properties set in
# the infer config file.
[primary-gie]
enable=1
gpu-id=0
model-engine-file=../../models/Primary_Detector/resnet10.caffemodel_b30_int8.engine
#Required to display the PGIE labels, should be added even when using config-file
#property
labelfile-path=../../models/Primary_Detector/labels.txt
batch-size=30
#Required by the app for OSD, not a plugin property
bbox-border-color0=1;0;0;1
bbox-border-color1=0;1;1;1
bbox-border-color2=0;0;1;1
bbox-border-color3=0;1;0;1
interval=0
#Required by the app for SGIE, when used along with config-file property
gie-unique-id=1
nvbuf-memory-type=0
config-file=config_infer_primary.txt

[tests]
file-loop=0
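Note that the first line of the error log already points at a likely culprit: the config above sets config-file under [streammux], which the app reports as an unknown key before the muxer fails to be created. A quick way to spot such keys is to diff the section against the keys the shipped sample configs actually use. This is a sketch; the allowed-key list below is assumed from the keys seen in the sample [streammux] groups and may not be exhaustive:

```python
import configparser

# [streammux] keys observed in the shipped sample configs
# (assumed list -- may not be exhaustive).
KNOWN_STREAMMUX_KEYS = {
    "gpu-id", "live-source", "batch-size", "batched-push-timeout",
    "width", "height", "enable-padding", "nvbuf-memory-type",
}

def unknown_streammux_keys(text: str) -> set:
    """Return [streammux] keys not in the known-key list."""
    cfg = configparser.ConfigParser()
    cfg.read_string(text)
    if not cfg.has_section("streammux"):
        return set()
    return set(cfg["streammux"]) - KNOWN_STREAMMUX_KEYS

# The [streammux] group from the config posted above.
sample = """
[streammux]
gpu-id=0
live-source=0
batch-size=30
batched-push-timeout=40000
width=1280
height=720
enable-padding=0
nvbuf-memory-type=0
config-file=config_mux_source30.txt
"""

print(unknown_streammux_keys(sample))  # → {'config-file'}
```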
I tried different sink0 types: 1, 2, and 3. But none of them works.
For sink0, the default type=2 (EglSink) renders to a display, so make sure you have a screen connected.
You can check source30_1080p_dec_infer-resnet_tiled_display_int8.txt in the DeepStream 4.0 package.
Thanks for your reply! This problem is solved. DeepStream doesn’t work on the GTX Titan; after switching to an RTX card, the problem went away.
Thanks for sharing.
Can you give more details about “DeepStream doesn’t work on GTX Titan”? I think it should work.
More specifically, I mean the analytics server set up in sample test 4. An analytics server with a GTX Titan can run all of the required Docker containers, but the perception server will not be able to find the broker. The problem was solved after setting up the server on an RTX machine.
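For anyone hitting the same “cannot find the broker” symptom: before blaming the GPU, it is worth checking basic TCP reachability of the broker endpoint from the perception server. A small sketch, assuming the `host;port;topic` connection-string format used by the test-4 sample; the host, port, and topic below are placeholders, not values from this thread:

```python
import socket

def parse_conn_str(conn_str: str):
    """Split a 'host;port;topic' connection string (format assumed
    from the deepstream-test4 sample) into its parts."""
    host, port, topic = conn_str.split(";")
    return host, int(port), topic

def broker_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a plain TCP connection to the broker succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Placeholder endpoint -- substitute your broker's actual address.
host, port, topic = parse_conn_str("localhost;9092;deepstream-topic")
print(f"broker {host}:{port} reachable: {broker_reachable(host, port)}")
```

If the connection succeeds from one machine but not the other, the problem is networking (Docker network, firewall, advertised listener) rather than the GPU itself.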
Please also consider helping with my problem under the link: https://devtalk.nvidia.com/default/topic/1058327/deepstream-sdk/questions-regarding-test-application-4-/post/5369954/#5369954