How to deploy YOLOv5 on NVIDIA Triton on a Jetson Xavier NX

I am unable to run inference on Triton Server on a Jetson Xavier NX.

  1. What command are you using to start the Triton container?
    I am not using the Triton NGC container because it gives the error "GPU not detected, install NVIDIA Container Toolkit". When I tried to install the NVIDIA Container Toolkit on the Jetson, I ran into another error. That is why I am running Triton natively on the Jetson.
    link - Releases · triton-inference-server/server · GitHub

I get a "command not found" error while running this command:

tritonserver --model-repository=/path/to/model_repo --backend-directory=/path/to/tritonserver/backends \
         --backend-config=tensorflow,version=2
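
The "command not found" error usually just means the tritonserver binary is not on PATH: the JetPack release is a plain tarball, so the server has to be started with its full path from the extracted bin/ directory, with LD_LIBRARY_PATH pointing at the bundled lib/ directory. A minimal sketch, assuming the tarball was extracted to $HOME/tritonserver (adjust the paths to your setup):

# extract the JetPack release (it unpacks into bin/, lib/, backends/, include/)
mkdir -p $HOME/tritonserver
tar -xzf tritonserver2.14.0-jetpack4.6.tgz -C $HOME/tritonserver

# point the loader at the bundled shared libraries
export LD_LIBRARY_PATH=$HOME/tritonserver/lib:$LD_LIBRARY_PATH

# run the binary by full path; it is not installed onto PATH
$HOME/tritonserver/bin/tritonserver \
    --model-repository=$(pwd)/trtis_model_repo_sample_1 \
    --backend-directory=$HOME/tritonserver/backends
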
  2. Commands used thereafter:

sudo LD_LIBRARY_PATH=$HOME/tritonserver/lib ./yolov5s.onnx -m system -v -r $(pwd)/trtis_model_repo_sample_1 -t 6 -s false -p $HOME/tritonserver

I am using this command to enable dynamic batching, as given in the README.md file of the release tarball (https://github.com/triton-inference-server/server/releases/download/v2.14.0/tritonserver2.14.0-jetpack4.6.tgz).
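
Note that ./yolov5s.onnx in the command above is the ONNX model file, not an executable, so the shell cannot run it. In the README example the first token is a compiled demo client (built from peoplenet.cc by the sample Makefile), not the model. A hedged sketch of the intended invocation, assuming a YOLOv5 client binary has actually been built (the name yolov5s matches the Makefile target in the error below and is an assumption):

sudo LD_LIBRARY_PATH=$HOME/tritonserver/lib ./yolov5s -m system -v -r $(pwd)/trtis_model_repo_sample_1 -t 6 -s false -p $HOME/tritonserver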

  3. Command with error and stack trace:
    command - make
    error - g++ -I../../server -I/usr/include/opencv4 -I../../core/include/ -I/usr/local/cuda/targets/aarch64-linux/include -I/home/npci-nx1/tritonserver/include/tritonserver -D TRITON_ENABLE_GPU=ON -D TRITON_MIN_COMPUTE_CAPABILITY=5.3 -c -g -o yolov5s.o yolov5s.onnx
    g++: warning: yolov5s.onnx: linker input file unused because linking not done
    g++ yolov5s.o -L/home/npci-nx1/tritonserver/lib -L/usr/lib -L/usr/local/cuda/targets/aarch64-linux/lib -lpthread -ltritonserver -lopencv_core -lopencv_highgui -lopencv_imgproc -lopencv_imgcodecs -lopencv_dnn -lcudart -o yolov5s
    g++: error: yolov5s.o: No such file or directory
    Makefile:42: recipe for target 'yolov5s' failed
    make: *** [yolov5s] Error 1
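
The g++ warning points at the root cause: the Makefile is being given yolov5s.onnx in place of a C++ source file. Since g++ does not recognize the .onnx extension, the -c step compiles nothing, yolov5s.o is never produced, and the link step fails. An ONNX model cannot be compiled; the Makefile expects a client source like the sample's peoplenet.cc. A hedged sketch of the two rules, assuming a yolov5s.cc client source that you would write yourself (for example, a copy of peoplenet.cc with the pre- and post-processing adapted to YOLOv5):

yolov5s: yolov5s.o
	g++ yolov5s.o -L/home/npci-nx1/tritonserver/lib -L/usr/lib \
	    -L/usr/local/cuda/targets/aarch64-linux/lib \
	    -lpthread -ltritonserver -lopencv_core -lopencv_highgui \
	    -lopencv_imgproc -lopencv_imgcodecs -lopencv_dnn -lcudart -o yolov5s

yolov5s.o: yolov5s.cc
	g++ -I../../server -I/usr/include/opencv4 -I../../core/include/ \
	    -I/usr/local/cuda/targets/aarch64-linux/include \
	    -I/home/npci-nx1/tritonserver/include/tritonserver \
	    -D TRITON_ENABLE_GPU=ON -D TRITON_MIN_COMPUTE_CAPABILITY=5.3 \
	    -c -g -o yolov5s.o yolov5s.cc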

  4. What are you trying to accomplish?
    I am trying to run inference on the Triton server. I converted my model to ONNX and placed it inside the models directory along with config.pbtxt and labels.txt (see the layout sketch below). After that, running the make command gives the error quoted above. I simply followed the steps in the README.md file, which uses the peoplenet model as its example. That directory also contains peoplenet.cc and peoplenet.o files, which I do not have an equivalent of for YOLO.
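
For reference, Triton expects each model in the repository to follow a fixed layout: the model file must sit inside a numbered version subdirectory and, for the ONNX Runtime backend, be named model.onnx; placing yolov5s.onnx directly next to config.pbtxt will not load. A hedged sketch of the layout and config; the tensor names and shapes ("images", "output0", 640x640 input, 25200x85 output) are assumptions based on a default Ultralytics YOLOv5s export and should be verified against your actual model (e.g. with Netron):

trtis_model_repo_sample_1/
└── yolov5s/
    ├── config.pbtxt
    ├── labels.txt
    └── 1/
        └── model.onnx

# config.pbtxt
name: "yolov5s"
platform: "onnxruntime_onnx"
max_batch_size: 8
input [
  {
    name: "images"        # assumed export name; verify with Netron
    data_type: TYPE_FP32
    dims: [ 3, 640, 640 ]
  }
]
output [
  {
    name: "output0"       # assumed; older exports use "output"
    data_type: TYPE_FP32
    dims: [ 25200, 85 ]   # 80 COCO classes + 5, at 640x640 input
  }
]
dynamic_batching { }

Note that max_batch_size > 0 and dynamic_batching only work if the ONNX model was exported with a dynamic batch dimension (export.py --dynamic in the Ultralytics repo); a fixed batch-1 export instead needs max_batch_size: 0 with the batch dimension spelled out in dims.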

  5. Hardware:

a. GPU: 384-core NVIDIA Volta GPU with 48 Tensor Cores
b. HW config, CPU, RAM: 6-core NVIDIA Carmel ARM v8.2 64-bit CPU, 8 GB RAM

  6. Software:

a. DNN framework: PyTorch
b. What do you want to do with yolov5?
Detect number of vehicles in the given frame.
c. The current state of the model? h5? ONNX? TensorRT?
I have both yolov5s.onnx and yolov5s.trt
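
One caveat on the .trt file: Triton's TensorRT backend expects the engine in a version directory named model.plan, and a serialized engine only loads on the same GPU architecture and TensorRT version it was built with, so it must be generated on the Xavier NX itself. A hedged sketch of the corresponding layout (the yolov5s_trt directory name is illustrative):

trtis_model_repo_sample_1/
└── yolov5s_trt/
    ├── config.pbtxt      # platform: "tensorrt_plan"
    └── 1/
        └── model.plan    # yolov5s.trt renamed, built on this Jetson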

There has been no update from you for a while, so we assume this is no longer an issue.
Hence we are closing this topic. If you need further support, please open a new one.
Thanks

Hi,

Do you encounter any errors when deploying the YOLOv5 model with Triton?
Does the densenet_onnx example work for you?
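
A quick way to answer both questions is to start the server and poll the standard Triton HTTP endpoints (assuming the default port 8000):

# server-wide readiness
curl -v localhost:8000/v2/health/ready

# per-model metadata; a 200 response means the model loaded
curl localhost:8000/v2/models/densenet_onnx
curl localhost:8000/v2/models/yolov5s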

Thanks.