DeepLearning Real-time Detection on TX2

Hi all, I'm new to deep learning, and my current task is to choose a deep learning model that can run real-time detection on the TX2.

I have tested YOLOv2 based on darknet and got only 5 FPS; tiny YOLO performed better at 15 FPS. However, I need a model whose detection speed can reach about 25 FPS.

I have been told that MobileNet on TensorFlow reaches real-time detection speed on the TX2. Has anyone here worked in this field? If so, which model and platform did you use? MobileNet, ShuffleNet, ResNet, TensorFlow, and Caffe have all been suggested to me. Have you tried any of them, and do you know more about them? Or do you have better advice and recommendations?

For me, speed is the top priority.

Tks.

Hi,

1. It's recommended to learn DL from jetson-inference:

The object detection sample can reach around 10 FPS with DetectNet (see the first sketch below).

2. We have also tested MobileNet with the object detection API in TensorFlow.
The required "where" op runs slowly on the GPU, so it only reaches about 5 FPS on Jetson (the second sketch below shows one way to time such a model).

We are working with the Google team on a solution for this.
We will update you with more information later.
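
For point 1, a minimal detection loop with the jetson-inference Python bindings looks roughly like the sketch below. This is only a sketch under assumptions: it assumes a recent build of the repo with its Python bindings enabled, and the DetectNet-based "pednet" model and onboard camera "0" are placeholders, not a prescribed setup.

# Minimal sketch of a jetson-inference detection loop (assumptions: the repo was
# built with its Python bindings; "pednet" and camera "0" are placeholders).
import jetson.inference
import jetson.utils

net = jetson.inference.detectNet("pednet", threshold=0.5)   # DetectNet-based model
camera = jetson.utils.gstCamera(1280, 720, "0")             # onboard CSI camera

for _ in range(100):
    img, width, height = camera.CaptureRGBA()               # grab a frame (GPU memory)
    detections = net.Detect(img, width, height)             # run detection on the frame
    print("detections: %d  network FPS: %.1f" % (len(detections), net.GetNetworkFPS()))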
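
For point 2, a figure like 5 FPS can be reproduced by timing a frozen SSD-MobileNet graph exported from the TF Object Detection API. The sketch below uses the TensorFlow 1.x API; the .pb path and the 300x300 dummy input are placeholders.

# Minimal sketch: time a frozen SSD-MobileNet graph (TensorFlow 1.x API;
# the .pb path and the dummy 300x300 input are placeholders).
import time
import numpy as np
import tensorflow as tf

PB_PATH = "frozen_inference_graph.pb"   # placeholder path to the exported graph

graph = tf.Graph()
with graph.as_default():
    graph_def = tf.GraphDef()
    with tf.gfile.GFile(PB_PATH, "rb") as f:
        graph_def.ParseFromString(f.read())
    tf.import_graph_def(graph_def, name="")

with tf.Session(graph=graph) as sess:
    image = np.zeros((1, 300, 300, 3), dtype=np.uint8)      # dummy input batch
    outputs = [graph.get_tensor_by_name(n + ":0")
               for n in ("detection_boxes", "detection_scores",
                         "detection_classes", "num_detections")]
    feed = {graph.get_tensor_by_name("image_tensor:0"): image}
    sess.run(outputs, feed_dict=feed)                        # warm-up run
    start = time.time()
    for _ in range(100):
        sess.run(outputs, feed_dict=feed)
    print("fps: %.1f" % (100.0 / (time.time() - start)))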

Thanks.

Hi Franky,

We are also looking for a fast object detector to run on the Jetson. We are trying to optimise YOLOv2 with TensorRT but have not succeeded yet. We will probably try different networks soon.

How did you get 5 FPS? Did you run the darknet code directly?
I am curious to hear about any developments on this.

Best

Hi,

Sorry, we don't know Franky_029's environment.
Maybe he can share some information with us.

Here are some YOLO results reported by another user:
https://devtalk.nvidia.com/default/topic/1027819/jetson-tx2/object-detection-performance-jetson-tx2-slower-than-expected/post/5227983/#5227983

  • Darknet with Tiny-YOLO: 17.5 FPS

Hope this helps.
Thanks.

Hi am2266 & AastaLLL,

Sorry for the delayed reply.

First, I just ran the darknet code directly with GPU/CUDNN/OPENCV enabled in the Makefile, and configured the TX2 environment like this:

sudo nvpmodel -m 0
sudo ./jetson_clocks.sh

I got 5 FPS with the YOLOv2 cfg and almost 15 FPS with the tiny YOLO cfg.
Then I found some source code on GitHub; the URL is:

https://github.com/AlexeyAB/darknet

Thanks AlexeyAB… I tested this code and got 25 FPS with the tiny YOLO cfg.

I think TensorRT might improve the performance further, but I am not sure.

I appreciate your suggestions.
Tks.

Hi,

Thanks for sharing information with us.

Currently, TensorRT supports Caffe and TensorFlow models.
Once you convert your model into Caffe format, you can follow our documentation and sample to launch it with TensorRT.
Doc: http://docs.nvidia.com/deeplearning/sdk/tensorrt-developer-guide/index.html#caffeworkflow
Sample: https://github.com/dusty-nv/jetson-inference
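
For reference, building an engine from a Caffe model with the TensorRT Python API looks roughly like the sketch below. This is only a sketch under assumptions: it assumes a TensorRT release whose Python bindings include the Caffe parser, and the file names and the "prob" output blob are placeholders for your own network.

# Minimal sketch: build a TensorRT engine from a Caffe model (assumptions:
# TensorRT Python bindings with the Caffe parser are available; file names
# and the "prob" output blob name are placeholders).
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def build_engine(deploy_file="deploy.prototxt", model_file="model.caffemodel"):
    with trt.Builder(TRT_LOGGER) as builder, \
         builder.create_network() as network, \
         trt.CaffeParser() as parser:
        builder.max_batch_size = 1
        builder.max_workspace_size = 1 << 28              # 256 MB of build workspace
        # Parse the Caffe deploy/model files into the TensorRT network definition.
        model_tensors = parser.parse(deploy=deploy_file, model=model_file,
                                     network=network, dtype=trt.float32)
        network.mark_output(model_tensors.find("prob"))   # mark the output blob
        return builder.build_cuda_engine(network)

if __name__ == "__main__":
    engine = build_engine()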

Thanks.

Hi!
I'm working on running YOLO on the TX2 dev kit. I have installed all of the requirements for YOLO: OpenCV, CUDA, and cuDNN.
I've tested whether the onboard camera works via GStreamer: gst-launch-1.0 nvarguscamerasrc ! nvvidconv ! xvimagesink
It works properly.
Then I tried to run YOLO with your code: ./darknet detector demo cfg/coco.data cfg/yolo.cfg yolo.weights "nvcamerasrc ! video/x-raw(memory:NVMM), width=(int)1280, height=(int)720, format=(string)I420, framerate=(fraction)30/1 ! nvvidconv flip-method=0 ! video/x-raw, format=(string)BGRx ! videoconvert ! video/x-raw, format=(string)BGR ! appsink"
The result is "Video-stream stopped!"
How can I track down the issue?
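
So far the only check I can think of is to open the same kind of pipeline with OpenCV outside of darknet, to see whether the capture itself or the darknet side is failing. Below is a minimal sketch, assuming my OpenCV build has GStreamer support; note that my standalone gst-launch test above used nvarguscamerasrc while the darknet command used nvcamerasrc, so I use nvarguscamerasrc here as well.

# Minimal sketch: open the camera pipeline with OpenCV to check that frames
# arrive outside of darknet (assumes OpenCV was built with GStreamer support).
import cv2

gst = ("nvarguscamerasrc ! video/x-raw(memory:NVMM), width=1280, height=720, "
       "framerate=30/1 ! nvvidconv flip-method=0 ! video/x-raw, format=BGRx ! "
       "videoconvert ! video/x-raw, format=BGR ! appsink")

cap = cv2.VideoCapture(gst, cv2.CAP_GSTREAMER)
if not cap.isOpened():
    print("failed to open the pipeline")
else:
    ok, frame = cap.read()
    print("frame received:", ok, frame.shape if ok else None)
    cap.release()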
Thanks.