The steps from comment #6 worked for me with JetPack 3.3 and DeepStream 1.5 on the TX2. I am attempting to build https://github.com/vat-nvidia/deepstream-plugins on my TX2 but am running into a few make errors. Interestingly, that GitHub repo also states to use JetPack 3.3 and DeepStream 1.5. Hopefully I can get past the make errors.
I do have the YOLO deepstream plugin working on Jetson TX2.
It's true that the first version of the plugin was Tesla-only, but a later update to the GitHub repo added support for Jetson (Tegra) as well.
Is NVIDIA still working on creating a unified DeepStream version that also works on the TX2? I would love to be able to develop and test a DeepStream program on my desktop and then deploy it to the TX2. I also need the ability to connect to multiple streams (MIPI cameras), which the current version for the TX2 does not support. Also, the Transfer Learning Toolkit seems especially useful in resource-constrained embedded use cases; it would be a waste to use it only for server-side solutions.
And another question: will it be possible to use the models from the tf_trt GitHub repo in DeepStream?
NVIDIA is working toward a unified DeepStream, but there is no timeline for a release.
BTW, Tegra and Tesla have many HW design differences, so it is hard to cover both platforms completely.
We are also exposing some interfaces to let customers implement unsupported layers in the gst-nvinfer plugin.
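For reference, here is a minimal sketch of what such a custom layer looks like with the TensorRT 4.x nvinfer1::IPlugin interface (the version in this thread). The class name and the pass-through behaviour are illustrative only, not part of the shipped samples; copy the exact virtual signatures from the NvInfer.h your JetPack installs, since they have changed between TensorRT releases.

// Minimal custom-layer sketch against the TensorRT 4.x IPlugin interface.
// Illustrative only: a real plugin would launch its own CUDA kernel.
#include "NvInfer.h"
#include <cuda_runtime_api.h>
#include <cassert>
#include <cstring>

class PassThroughPlugin : public nvinfer1::IPlugin
{
public:
    int getNbOutputs() const override { return 1; }

    nvinfer1::Dims getOutputDimensions(int index, const nvinfer1::Dims* inputs,
                                       int nbInputDims) override
    {
        assert(index == 0 && nbInputDims == 1);
        return inputs[0];                       // output shape == input shape
    }

    void configure(const nvinfer1::Dims* inputDims, int /*nbInputs*/,
                   const nvinfer1::Dims* /*outputDims*/, int /*nbOutputs*/,
                   int /*maxBatchSize*/) override
    {
        mCount = 1;                             // elements per image, cached for enqueue()
        for (int i = 0; i < inputDims[0].nbDims; ++i)
            mCount *= inputDims[0].d[i];
    }

    int initialize() override { return 0; }
    void terminate() override {}
    size_t getWorkspaceSize(int /*maxBatchSize*/) const override { return 0; }

    int enqueue(int batchSize, const void* const* inputs, void** outputs,
                void* /*workspace*/, cudaStream_t stream) override
    {
        // A real plugin would run its CUDA kernel here; this sketch just copies.
        cudaMemcpyAsync(outputs[0], inputs[0],
                        batchSize * mCount * sizeof(float),
                        cudaMemcpyDeviceToDevice, stream);
        return 0;
    }

    size_t getSerializationSize() override { return sizeof(mCount); }
    void serialize(void* buffer) override { std::memcpy(buffer, &mCount, sizeof(mCount)); }

private:
    size_t mCount{0};
};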
g++ -I"usr/include/aarch64-linux-gnu" -I"/usr/local/cuda-9.0/include" -I "/usr/include" -c -o build/yolov3-tiny.o -O2 -std=c++11 -lstdc++fs -ldl -fPIC -Wall -Wunused-function -Wunused-variable -Wfatal-errors -I/usr/include/glib-2.0 -I/usr/lib/aarch64-linux-gnu/glib-2.0/include yolov3-tiny.cpp
In file included from ds_image.h:28:0,
from calibrator.h:29,
from yolo.h:29,
from yolov3-tiny.h:29,
from yolov3-tiny.cpp:26:
trt_utils.h:83:22: error: 'nvinfer1::DimsHW YoloTinyMaxpoolPaddingFormula::compute(nvinfer1::DimsHW, nvinfer1::DimsHW, nvinfer1::DimsHW, nvinfer1::DimsHW, nvinfer1::DimsHW, const char*) const' marked 'override', but does not override
nvinfer1::DimsHW compute(nvinfer1::DimsHW inputDims, nvinfer1::DimsHW kernelSize,
^
compilation terminated due to -Wfatal-errors.
Makefile:85: recipe for target 'build/yolov3-tiny.o' failed
make[1]: *** [build/yolov3-tiny.o] Error 1
make[1]: Leaving directory '/home/nvidia/liuhang/deepstream-plugins/sources/lib'
Makefile:75: recipe for target 'deps' failed
make: *** [deps] Error 2
Actually, I already updated my JetPack from 3.2 to 3.3, and I also checked the TensorRT version:
ii libnvinfer-dev 4.1.3-1+cuda9.0 arm64 TensorRT development libraries and headers
ii libnvinfer-samples 4.1.3-1+cuda9.0 arm64 TensorRT samples and documentation
ii libnvinfer4 4.1.3-1+cuda9.0 arm64 TensorRT runtime libraries
ii tensorrt 4.0.2.0-1+cuda9.0 arm64 Meta package of TensorRT
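For anyone hitting the same error: "marked 'override', but does not override" means the compute() declaration in trt_utils.h does not match the IOutputDimensionsFormula::compute() pure virtual in the NvInfer.h that your TensorRT package installs. With libnvinfer 4.1.3 the TensorRT 4 header, as far as I can tell, declares compute() without a trailing const, while the repo's override adds one, so the safest fix is to copy the exact signature out of your installed NvInfer.h. A sketch of the non-const form, with an illustrative body:

// Sketch only: mirror the exact pure-virtual signature from your installed
// NvInfer.h. TensorRT 4.x appears to declare IOutputDimensionsFormula::compute()
// without a trailing const; code written against a newer header that adds
// the const fails to override, exactly as in the log above.
#include "NvInfer.h"

class YoloTinyMaxpoolPaddingFormula : public nvinfer1::IOutputDimensionsFormula
{
public:
    nvinfer1::DimsHW compute(nvinfer1::DimsHW inputDims, nvinfer1::DimsHW kernelSize,
                             nvinfer1::DimsHW stride, nvinfer1::DimsHW padding,
                             nvinfer1::DimsHW dilation, const char* layerName) override
    {
        // Illustrative body: "same" padding, i.e. ceil(input / stride).
        // The real formula in trt_utils.h implements the yolov3-tiny maxpool
        // behaviour; only the signature matters for fixing this error.
        int h = (inputDims.h() + stride.h() - 1) / stride.h();
        int w = (inputDims.w() + stride.w() - 1) / stride.w();
        return nvinfer1::DimsHW(h, w);
    }
};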