Jetson TX2 - DIGITS workflow

Hello all,

I am a new Jetson TX2 user. I just installed JetPack 3.2 and DeepStream SDK 1.5 on the host computer and the target TX2; everything compiled fine, and I started running the VisionWorks samples and the DeepStream samples.
My configuration is:

Jetson TX2 target device

Host computer :
Ubuntu 16.04 LTS
Intel Core i5-2400 @ 3.10 GHz, 8 GB RAM, onboard Intel graphics.

I am following the guide "GitHub - dusty-nv/jetson-inference: Hello AI World guide to deploying deep-learning inference networks and deep vision primitives with TensorRT and NVIDIA Jetson."

I would like to run some tests with a neural network for AI vehicle detection on a real-time camera stream.

I have tested: nvgstiva-app -c /home/nvidia/DeepStream-Samples/configs/PGIE-FP16-CarType.txt, but did not get good detection/tracking results on the demo parking video.

  1. First question: is it possible to download some pretrained networks for vehicle detection, so I can start running tests with the TX2, TensorRT, and nvgstiva-app?

  2. Before investing in a dedicated host machine with a GPU (or a cloud system), is it possible to install TensorFlow and DIGITS on my current host machine just for some evaluation testing?

Otherwise, if that is not possible, is it possible to install TensorFlow and DIGITS directly on the TX2?

Thanks in advance for any suggestions.

Regards
Marco Gonnelli

The L4T Multimedia SDK (which JetPack installs on the Jetson) includes a vehicle detection model (and demo) that works pretty well.

You can install TensorFlow, which will run on the CPU; it will be much slower for training complex models, but it is enough to try the framework. DIGITS, I believe, requires a GPU.

DIGITS is not supported on Jetson, but you can install TensorFlow natively on the Jetson TX2; see this post: https://devtalk.nvidia.com/default/topic/1031300/jetson-tx2/tensorflow-1-8-wheel-with-jetpack-3-2-/

Thanks for the reply.
I also tested the backend application of the Multimedia API with a sample movie, as you suggested, but I don't see much accuracy in vehicle detection.
I have tried the YOLOv3 network on the Jetson and get very good accuracy for vehicle detection, but it runs very slowly, about 3-4 fps.
I suppose that is because YOLOv3 is not using TensorRT on the TX2.
I will try to convert the YOLOv3 network to a caffemodel, which is supported by TensorRT, right?
Any suggestion will be appreciated.

Thanks so much
Marco

Hi,

We have a Caffe-to-TensorRT parser on Jetson.

We are not sure if every layer of YOLOv3 is supported by TensorRT and the parser.
Please check it with the information here:
Developer Guide :: NVIDIA Deep Learning TensorRT Documentation

If there is a layer that isn't supported by TensorRT yet, you can write your own implementation with our plugin API.
Here is an example for your reference: [url]https://github.com/AastaNV/Face-Recognition[/url]
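To make the layer check concrete, here is a minimal C++ sketch (not an official sample) that parses a converted caffemodel with the TensorRT 3.x Caffe parser and builds an engine. The file names and the output blob name are placeholders for whatever your YOLOv3 conversion produces. If the parser hits an unsupported layer, parse() fails, and that is where a plugin factory registered with setPluginFactory() would come in:

[code]
#include <iostream>
#include "NvInfer.h"
#include "NvCaffeParser.h"

using namespace nvinfer1;
using namespace nvcaffeparser1;

// Minimal logger required by the TensorRT builder.
class Logger : public ILogger
{
    void log(Severity severity, const char* msg) override
    {
        if (severity != Severity::kINFO)
            std::cout << msg << std::endl;
    }
} gLogger;

int main()
{
    // Placeholder file names - replace with your converted YOLOv3 model.
    const char* deployFile = "yolov3.prototxt";
    const char* modelFile  = "yolov3.caffemodel";
    const char* outputBlob = "detection_out";   // placeholder output blob name

    IBuilder* builder = createInferBuilder(gLogger);
    INetworkDefinition* network = builder->createNetwork();
    ICaffeParser* parser = createCaffeParser();

    // If the network contains layers TensorRT does not support, the parser
    // reports errors here; a plugin factory for such layers would be attached
    // with parser->setPluginFactory(&factory) before calling parse().
    const IBlobNameToTensor* blobNameToTensor =
        parser->parse(deployFile, modelFile, *network, DataType::kFLOAT);
    if (!blobNameToTensor)
    {
        std::cerr << "Parsing failed - check for unsupported layers." << std::endl;
        return 1;
    }

    // Mark the detection output so the builder keeps it.
    network->markOutput(*blobNameToTensor->find(outputBlob));

    builder->setMaxBatchSize(1);
    builder->setMaxWorkspaceSize(16 << 20);     // 16 MB workspace

    ICudaEngine* engine = builder->buildCudaEngine(*network);
    if (!engine)
    {
        std::cerr << "Engine build failed." << std::endl;
        return 1;
    }
    std::cout << "Engine built successfully." << std::endl;

    // Clean up (TensorRT 3.x style).
    engine->destroy();
    parser->destroy();
    network->destroy();
    builder->destroy();
    return 0;
}
[/code]

The setPluginFactory() hook mentioned in the comment is the same mechanism the Face-Recognition example above uses to implement its custom layers.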

Thanks.