How to run RTP Camera in deepstream on Nano

I tried your config file with “deepstream-app”, but I receive this error:

Creating LL OSD context new
Opening in BLOCKING MODE 
NvMMLiteOpen : Block : BlockType = 261 
NVMEDIA: Reading vendor.tegra.display-size : status: 6 
NvMMLiteBlockCreate : Block : BlockType = 261 
ERROR from tiled_display_tiler: GstNvTiler: FATAL ERROR; NvTiler::Composite failed
Debug info: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvtiler/gstnvtiler.cpp(665): gst_nvmultistreamtiler_transform (): /GstPipeline:pipeline/GstBin:tiled_display_bin/GstNvMultiStreamTiler:tiled_display_tiler
Quitting
0:00:23.081478570  9311     0x35f25190 WARN                 nvinfer gstnvinfer.cpp:1830:gst_nvinfer_output_loop:<primary_gie_classifier> error: Internal data stream error.
0:00:23.081561518  9311     0x35f25190 WARN                 nvinfer gstnvinfer.cpp:1830:gst_nvinfer_output_loop:<primary_gie_classifier> error: streaming stopped, reason error (-5)
ERROR from primary_gie_classifier: Internal data stream error.
Debug info: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(1830): gst_nvinfer_output_loop (): /GstPipeline:pipeline/GstBin:primary_gie_bin/GstNvInfer:primary_gie_classifier:
streaming stopped, reason error (-5)
App run failed

Hi DaneLLL,

Thank you for the great support you bring to this forum.
I’m asking you to please flag this issue as a major one.
We’ve worked for almost 8 months with DS3.0EA and Jetson AGX and had a great experience with it.
We were waiting for DS4.0 before moving into production.
The problem is that the standard DS4 pipeline (the new v4l2 decoder → muxer → inference → tiler → display) crashes in the tiler with IP cameras from major manufacturers (e.g. HikVision), while the same setup worked perfectly on DS3.0EA.
We understand that this was introduced with the big architecture changes in DS4.0 (merged sdk for Tegra and dGPU).
Currently, from other forum users’ experience and ours:

  • IP cameras can be used with DS4 without the tiler, provided the streammux width and height are set to the camera’s native resolution.
  • The tiler works with video files but only works with a few IP cameras, even though it is a critical component for multi-stream IVA.
    The problem is serious enough for us to halt any deployment until this issue is addressed.
    We look forward to feedback from the NVIDIA dev team to unblock the situation.
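The current workaround can be sketched as a config fragment for deepstream-app (a hedged sketch: the RTSP URI and the 1920×1080 native resolution are placeholders for your own camera):

```ini
[tiled-display]
# Workaround: disable the tiler until the crash is fixed
enable=0

[source0]
enable=1
# 4 = RTSP source in deepstream-app
type=4
# Placeholder URI - replace with your camera's stream
uri=rtsp://user:password@192.168.0.201:554/Streaming/Channels/1
num-sources=1

[streammux]
live-source=1
# Set to the camera's native resolution (placeholder values)
width=1920
height=1080
```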

Hi,
We have taken this as a priority issue and will post an update as soon as we have any findings.

Thank you for your quick response. We’ll be waiting for the update.

Hi Guys,

Thanks for all your help. I have been trying to run deepstream-app with 2 RTSP streams from HIKVISION cameras on a Jetson Nano. Finally, I was able to get the app working by following the guidance in this thread. However, I have the following queries:

  1. Why does the tracker not work with RTSP input but work with file-based input? Could someone please help me understand the reason behind this?

  2. Why does the tiled display have issues with RTSP streams? Since the tiled display only depends on the decoded output, it should, in my understanding, be agnostic to the source type. Please correct me where my understanding goes wrong.

  3. When I tried to run yolov3 (FP16), I could only manage < 1 fps per camera. Are these figures indicative of the maximum achievable performance?

  4. In order to run yolov3 with 2 RTSP streams on the Nano (FP16), I changed “deepstream_app_config_yoloV3.txt” and “config_infer_primary_yoloV3.txt” in “sources/objectDetector_Yolo” as follows (app config first, then the infer config):

[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5
#gie-kitti-output-dir=streamscl

[tiled-display]
enable=0
rows=1
columns=1
width=1280
height=720
gpu-id=0
#(0): nvbuf-mem-default - Default memory allocated, specific to particular platform
#(1): nvbuf-mem-cuda-pinned - Allocate Pinned/Host cuda memory, applicable for Tesla
#(2): nvbuf-mem-cuda-device - Allocate Device cuda memory, applicable for Tesla
#(3): nvbuf-mem-cuda-unified - Allocate Unified cuda memory, applicable for Tesla
#(4): nvbuf-mem-surface-array - Allocate Surface Array memory, applicable for Jetson
nvbuf-memory-type=0

[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
type=4
uri=rtsp://admin:edge1234@192.168.0.201:554/Streaming/Channels/1
num-sources=1
gpu-id=0
# (0): memtype_device   - Memory type Device
# (1): memtype_pinned   - Memory type Host Pinned
# (2): memtype_unified  - Memory type Unified
cudadec-memtype=0
source-id=0

[source1]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
type=4
uri=rtsp://admin:edge1234@192.168.0.202:554/Streaming/Channels/1
num-sources=1
gpu-id=0
# (0): memtype_device   - Memory type Device
# (1): memtype_pinned   - Memory type Host Pinned
# (2): memtype_unified  - Memory type Unified
cudadec-memtype=0
source-id=1

[sink0]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File
type=2
sync=0
source-id=0
gpu-id=0
nvbuf-memory-type=0

[sink1]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File
type=2
sync=0
source-id=1
gpu-id=0
nvbuf-memory-type=0

[osd]
enable=1
gpu-id=0
border-width=1
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Serif
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0
nvbuf-memory-type=0

[streammux]
gpu-id=0
##Boolean property to inform muxer that sources are live
live-source=1
batch-size=1
##time out in usec, to wait after the first buffer is available
##to push the batch even if the complete batch is not formed
batched-push-timeout=40000
## Set muxer output width and height
width=1280
height=720
##Enable to maintain aspect ratio wrt source, and allow black borders, works
##along with width, height properties
enable-padding=0
nvbuf-memory-type=0

# config-file property is mandatory for any gie section.
# Other properties are optional and if set will override the properties set in
# the infer config file.
[primary-gie]
enable=1
gpu-id=0
#model-engine-file=model_b1_int8.engine
labelfile-path=labels.txt
batch-size=1
#Required by the app for OSD, not a plugin property
bbox-border-color0=1;0;0;1
bbox-border-color1=0;1;1;1
bbox-border-color2=0;0;1;1
bbox-border-color3=0;1;0;1
#interval=0
gie-unique-id=1
nvbuf-memory-type=0
config-file=config_infer_primary_yoloV3.txt

[tests]
file-loop=0

config_infer_primary_yoloV3.txt:

[property]
gpu-id=0
net-scale-factor=1
#0=RGB, 1=BGR
model-color-format=0
custom-network-config=/home/edgetensor/deepstream_sdk_v4.0_jetson/sources/objectDetector_Yolo/yolov3.cfg
model-file=/home/edgetensor/deepstream_sdk_v4.0_jetson/sources/objectDetector_Yolo/yolov3.weights
#model-engine-file=model_b1_int8.engine
labelfile-path=labels.txt
int8-calib-file=yolov3-calibration.table.trt5.1
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=2
num-detected-classes=80
gie-unique-id=1
is-classifier=0
maintain-aspect-ratio=1
parse-bbox-func-name=NvDsInferParseCustomYoloV3
custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so

Command used to run:

deepstream-app -c 'deepstream_app_config_yoloV3_nano_rtsp.txt'

In regard to the above, I have the following queries:

a) Since I do not see any tracker flag in the config settings, I assume the tracker is not enabled by default. Is that the right understanding?

b) Are the configurations right in general for the Jetson Nano? Since the achieved fps is quite low, I suspect I might be doing something wrong. Kindly let me know if anything looks off.
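(For reference on question a: in deepstream-app the tracker is only created when the config contains a [tracker] section, so the config above indeed runs without one. A minimal sketch of such a section, assuming DS 4.0’s default KLT tracker library path on Jetson; verify the path on your install:)

```ini
[tracker]
enable=1
# The tracker runs on a scaled-down frame; small sizes keep Nano load low
tracker-width=480
tracker-height=272
# Path is an assumption for a default DS 4.0 Jetson install
ll-lib-file=/opt/nvidia/deepstream/deepstream-4.0/lib/libnvds_mot_klt.so
gpu-id=0
```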

Thanks


Hi,
We are checking the issue. Since we do not have the IP cameras you have, please help by running the attached app and sharing the printed output, for example:

bufferformat: NvBufferColorFormat_NV12

Please modify the RTSP location in the code and execute these steps:

$ export MMAPI_INCLUDE=/usr/src/tegra_multimedia_api/include
$ export MMAPI_CLASS=/usr/src/tegra_multimedia_api/samples/common/classes
$ export USR_LIB=/usr/lib/aarch64-linux-gnu
$ g++ -Wall -std=c++11  decode.cpp -o decode $(pkg-config --cflags --libs gstreamer-app-1.0) -I$MMAPI_INCLUDE $USR_LIB/tegra/libnvbuf_utils.so $MMAPI_CLASS/NvEglRenderer.o $MMAPI_CLASS/NvElement.o $MMAPI_CLASS/NvElementProfiler.o $MMAPI_CLASS/NvLogging.o $USR_LIB/libEGL.so $USR_LIB/libGLESv2.so $USR_LIB/libX11.so
$ export DISPLAY=:1   # or :0
$ ./decode

decode.zip (1.47 KB)


Hi DaneLLL,

Here are the results:

For Hikvision and Avigilon ip cameras crashing with tiler:
bufferformat: NvBufferColorFormat_NV12_709_ER

For Hikvision cameras working properly with tiler:
bufferformat: NvBufferColorFormat_NV12

It seems that you’re on the right track!

Hi DaneLLL,

I executed the commands. I got the following output:

bufferformat: NvBufferColorFormat_NV12_709_ER

Thanks

Hello and thanks for the good work!

The Trendnet TV-IP314PI also shows:

bufferformat: NvBufferColorFormat_NV12_709_ER

Hi andrea_vighi,
Are you able to share the model name of the HikVision and Dahua cameras?

Hi DaneLLL,

The model name of HikVision camera at my end is : DS-2CD202WF-I


Hi,
Please apply the attached prebuilt libs and try again.

/usr/lib/aarch64-linux-gnu/gstreamer-1.0/libgstnvegltransform.so
/usr/lib/aarch64-linux-gnu/tegra/libnvbufsurftransform.so.1.0.0

R32_2_DS_4_0_PREBUILT_LIB.zip (4.4 MB)

Hi DaneLLL,

I am still getting the following:

bufferformat: NvBufferColorFormat_NV12_709_ER

Hi DaneLLL,

We’ve tested the prebuilt libs with all the cameras we have and… problem solved!
The tiler now works correctly with every kind of IP camera we’ve tested, and with any nvstreammux input width and height.
Thanks a lot for your excellent work, NVIDIA team!


Hi vdsx,

Could you please let us know what you did to solve the problem? It would help all of us tremendously.
What about tracker? Does it work too?

Thanks

Hi neophyte1,
Please refer to this post

It is a config file for multiple RTSP sources. Please check whether your IP cameras can run with that config file (you need to modify the uri fields to point to your sources). If it fails to run, please apply the two prebuilt libs and try again.


Hi neophyte1,

To apply the prebuilt libs, I replaced the original DS4.0 files with the ones from the zip. Unzip the archive, then:

sudo cp libgstnvegltransform.so /usr/lib/aarch64-linux-gnu/gstreamer-1.0/libgstnvegltransform.so
sudo cp libnvbufsurftransform.so /usr/lib/aarch64-linux-gnu/tegra/libnvbufsurftransform.so.1.0.0

Thanks a lot DaneLLL and vdsx for the help. I can now get the tracker to work, although the tiler still does not work very well: the measured fps drops when the tiler is enabled compared to when it is disabled.

I also got deepstream-test3-app to work, thanks to the help. However, this app does not include a tracker, and I wish to use a tracker along with the detector. Could you guide me on how to add a tracker to this app? I prefer this app since it is easy to follow and will help me build the entire system I am planning.
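(For the deepstream-test3 question, the usual approach is to create an "nvtracker" element and link it between the primary inference element and the tiler. The following is a C-style pseudocode sketch, not a drop-in patch: the variable names pgie, tiler, and pipeline are assumptions based on the sample, the property values are illustrative, and the library path assumes a default DS 4.0 Jetson install.)

```
/* Sketch only: assumes deepstream-test3 variable names pgie, tiler,
 * pipeline; error checking omitted for brevity. */
GstElement *tracker = gst_element_factory_make ("nvtracker", "tracker");
g_object_set (G_OBJECT (tracker),
    "tracker-width", 480,
    "tracker-height", 272,
    /* assumed default DS 4.0 KLT tracker library location */
    "ll-lib-file",
    "/opt/nvidia/deepstream/deepstream-4.0/lib/libnvds_mot_klt.so",
    NULL);
gst_bin_add (GST_BIN (pipeline), tracker);
/* Re-link the chain: ... -> pgie -> tracker -> tiler -> ... */
gst_element_link_many (pgie, tracker, tiler, NULL);
```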

DaneLLL-

The Trendnet TV-IP314PI is now working with your new and improved .so libs.

Thanks!

Hi neophyte1,
For clarity, please start a new post. We would like to keep this thread focused on the failure to run certain IP cameras.