No detection with YOLO for deepstream-app

Hi,

I’m working on real-time detection on Xavier using the YOLOv3 network.
I would like to detect across multiple sources, so I am planning to use “deepstream-app” rather than “deepstream-yolo-app”.
I followed the installation instructions at “https://github.com/NVIDIA-AI-IOT/deepstream_reference_apps” and also applied the patches described in the “https://devtalk.nvidia.com/default/topic/1047633/deepstream-sdk-on-jetson/yolo-for-deepstream-app/” forum thread.

I finally got the YOLO network running with deepstream-app using the sample in deepstream_reference_apps, but nothing is detected…
It works well with TensorRT and deepstream-yolo-app, just not with deepstream-app.
I don’t know why it isn’t working; the video just streams and nothing is detected.
Do you know how to diagnose this problem?

FYI 1: deepstream_app_config_yoloV3.txt

[application]
enable-perf-measurement=1
perf-measurement-interval-sec=1
gie-kitti-output-dir=streamscl

[tiled-display]
enable=1
rows=1
columns=1
width=1280
height=720
gpu-id=0

[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI
type=3
num-sources=1
uri=file:///home/nvidia/deepstream_reference_apps/yolo/video/1.mp4
gpu-id=0

[streammux]
gpu-id=0
batch-size=1
batched-push-timeout=-1
## Set muxer output width and height
width=1280
height=720
cuda-memory-type=1

[sink0]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File
type=2
sync=1
source-id=0
gpu-id=0

[osd]
enable=1
gpu-id=0
border-width=3
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Arial
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0

[yoloplugin]
enable=1
gpu-id=0
unique-id=15
processing-width=640
processing-height=480
full-frame=1
config-file-path=/home/nvidia/deepstream_reference_apps/yolo/config/yolov3.txt

FYI 2: config/yolov3.txt

--network_type=yolov3
--config_file_path=data/yolov3-obj.cfg
--wts_file_path=data/yolov3-obj_14000.weights
--labels_file_path=data/labels.txt

#Optional config params
# precision : Inference precision of the network
# calibration_table_path : Path to pre-generated calibration table. If flag is not set, a new calib table <network-type>-<precision>-calibration.table will be generated
# engine_file_path : Path to pre-generated engine(PLAN) file. If flag is not set, a new engine <network-type>-<precision>-<batch-size>.engine will be generated
# input_blob_name : Name of the input layer in the tensorRT engine file. Default value is 'data'
# print_perf_info : Print performance info on the console. Default value is false
# print_detection_info : Print detection info on the console. Default value is false
# calibration_images : Text file containing absolute paths of calibration images. Flag required if precision is kINT8 and there is no pre-generated calibration table
# prob_thresh : Probability threshold for detected objects. Default value is 0.5
# nms_thresh : IOU threshold for bounding box candidates. Default value is 0.5

#Uncomment the lines below to use a specific config param
--precision=kFLOAT
--calibration_table_path=/home/nvidia/deepstream_reference_apps/yolo/data/calibration/yolov3-calibration.table
--engine_file_path=/home/nvidia/deepstream_reference_apps/yolo/data/yolov3-obj_14000-kFLOAT-kGPU-batch1.engine
--print_prediction_info=true
--print_perf_info=true

### Config params trt-yolo-app only

# test_images : [REQUIRED] Text file containing absolute paths of all the images to be used for inference. Default value is data/test_images.txt.
# batch_size : Set batch size for inference engine. Default value is 1.
# view_detections : Flag to view images overlayed with objects detected. Default value is false.
# save_detections : Flag to save images overlayed with objects detected. Default value is true.
# save_detections_path : Path where the images overlayed with bounding boxes are to be saved. Required param if save_detections is set to true.
# decode : Decode the detections. This can be set to false if benchmarking network for throughput only. Default value is true.
# seed : Seed for the random number generator. Default value is std::time(0)

#Uncomment the lines below to use a specific config param
#--test_images=data/test_images.txt
#--batch_size=4
#--do_benchmark=true
#--view_detections=true
#--save_detections=true
#--save_detections_path=data/detections/
#--decode=false
#--seed
#--shuffle_test_set=false

Hi,

Have you applied this patch?
https://devtalk.nvidia.com/default/topic/1047633/deepstream-sdk-on-jetson/yolo-for-deepstream-app/post/5317972/#5317972

You should get a basic result with that patch.

By the way, this patch is a hack-type implementation.
We are working on an official update to deepstream_reference_apps, which will be released in the near future.

Thanks.

@AastaLLL

Hi, thanks for the reply, but as I already mentioned,
I have already applied those patches and nothing is detected.
When I execute

~/deepstream_reference_apps/yolo/samples/objectDetector_YoloV3$ 
deepstream-app -c deepstream_app_config_yoloV3.txt

from the command line, the following output appears and the video streams, but there are no detections on the stream.

** WARN: <parse_tiled_display:1018>: Unknown key 'gpu-id' for group [tiled-display]
** WARN: <parse_source:359>: Unknown key 'gpu-id' for group [source0]
** WARN: <parse_streammux:418>: Unknown key 'gpu-id' for group [streammux]
** WARN: <parse_streammux:418>: Unknown key 'cuda-memory-type' for group [streammux]
** WARN: <parse_sink:962>: Unknown key 'gpu-id' for group [sink0]
** WARN: <parse_osd:599>: Unknown key 'gpu-id' for group [osd]

Using winsys: x11 

Runtime commands:
	h: Print this help
	q: Quit

	p: Pause
	r: Resume

	z<row-index(0-9)><column-index(0-9)>: Expand a source from the 2D tile array
	u: Go back to 2D tile array

NOTE: To expand a source in the 2D tiled display and view object details, left-click on the source.
      To go back to the tiled display, right-click anywhere on the window.

** INFO: <bus_callback:98>: Pipeline ready

NvMMLiteOpen : Block : BlockType = 261 
NvMMLiteBlockCreate : Block : BlockType = 261 
Allocating new output: 1280x720 (x 12), ThumbnailMode = 0
OPENMAX: HandleNewStreamFormat: 3528: Send OMX_EventPortSettingsChanged: nFrameWidth = 1280, nFrameHeight = 720 
** INFO: <bus_callback:84>: Pipeline running

nvstreamtiler: batchSize set as 1

**PERF: FPS 0 (Avg)	
**PERF: 74.87 (74.87)	
**PERF: 30.00 (43.76)	
**PERF: 30.01 (37.90)		
...
**PERF: 30.00 (32.98)
	
** INFO: <bus_callback:121>: Received EOS. Exiting ...

Quitting
App run successful

I flashed my Xavier with JetPack 4.1.1, and DeepStream 3.0 was installed.

Hi,

Are all of the modifications you made listed in comment #1?
If so, the patch you applied may be incomplete; please apply all of the patches included in that comment.
They require updates not only to deepstream_reference_apps but also to deepstream-app.

Another alternative is to wait for our official release.

Thanks.

@AastaLLL

Hi,

No, I modified all the files as described in the patches.
I found two patch files at https://devtalk.nvidia.com/default/topic/1047633/deepstream-sdk-on-jetson/yolo-for-deepstream-app/post/5317972/#5317972 and applied them manually.
All the files in the deepstream-app folder were modified exactly as written in the patches.
Should I have applied the patches automatically instead, for example using a command?
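For reference, downloaded .patch files can usually be applied with the standard `patch` tool rather than by editing files by hand, which avoids missed hunks. The file names and contents below are placeholders (not the actual forum attachments); this is just a sketch of the workflow:

```shell
# Sketch: applying a unified-diff patch with `patch -p1`.
# "yolo.patch" and "src/deepstream_app.c" are stand-ins for the real
# forum patch and the real deepstream-app source tree.
set -e
mkdir -p demo/src && cd demo

# A stand-in source file, playing the role of a deepstream-app source file.
printf 'line one\nline two\n' > src/deepstream_app.c

# A stand-in unified-diff patch, playing the role of the forum patch.
cat > yolo.patch <<'EOF'
--- a/src/deepstream_app.c
+++ b/src/deepstream_app.c
@@ -1,2 +1,3 @@
 line one
 line two
+line three
EOF

# Dry run first to confirm the patch applies cleanly, then apply it.
patch -p1 --dry-run < yolo.patch
patch -p1 < yolo.patch
cat src/deepstream_app.c
```

In a git checkout, `git apply --check yolo.patch` performs the same dry-run check, and `git apply yolo.patch` applies it.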

+) When will the official alternative be released? In the second quarter of 2019, like the Nano DeepStream release?

It’s coming; please stay tuned for our announcements in the coming month.

Thanks

Hi,

We don’t push it as a patch since it was not an official release,
so it requires the user to manually update the source.

It’s recommended to wait for our official support, since that goes through full testing and verification.

Thanks