Connect Jetson Nano to Arduino

I have trained my model for object detection and everything works well: it detects the objects with no problem. However, I would like to connect an Arduino to the Jetson Nano, so that when one of the objects is detected and the model's confidence is above 90%, the Jetson sends data to the Arduino and turns on an LED.
I am working with the jetson-inference library by dusty, cloned from GitHub, but I am finding zero documentation about how to do this sort of thing, even though I have seen elsewhere that it is possible.
I know how to send the data from the Jetson Nano to the Arduino through Python in order to turn on the LED, but I would like to trigger it when my model detects an object.
Does anyone know how I can achieve this, or where I can find some documentation about it?

Hi @fratanto, you could communicate with the Arduino over a serial port / UART. Here are a couple of resources for doing that:
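
For example, the gist of the serial approach with the pyserial package looks like this (a minimal sketch, not from those resources; the port name /dev/ttyACM0 and the 9600 baud rate are assumptions that have to match your Arduino sketch and cable):

```python
# Hedged example: send a byte to the Arduino over USB serial with pyserial.
# Check `ls /dev/ttyACM* /dev/ttyUSB*` on the Nano to find the real device name.
import serial

arduino = serial.Serial("/dev/ttyACM0", 9600, timeout=1)  # assumed port and baud rate
arduino.write(b"1")  # your Arduino sketch reads this byte and turns the LED on
```

On the Arduino side, a Serial.read() in loop() would check for that byte and drive the LED pin accordingly.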

Also, the Jetson Nano can drive an LED directly through its GPIO. Here is a JetsonHacks article about it:
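
For reference, a minimal sketch with the Jetson.GPIO package that ships with JetPack could look like the following (the pin number is just an assumption, use whichever header pin your LED is actually wired to):

```python
# Hedged example: blink an LED wired to a header pin using Jetson.GPIO.
import time
import Jetson.GPIO as GPIO

LED_PIN = 12  # physical (BOARD) pin number -- an assumption, match your wiring

GPIO.setmode(GPIO.BOARD)
GPIO.setup(LED_PIN, GPIO.OUT, initial=GPIO.LOW)

GPIO.output(LED_PIN, GPIO.HIGH)  # LED on
time.sleep(1)
GPIO.output(LED_PIN, GPIO.LOW)   # LED off
GPIO.cleanup()
```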

Thank you for this!

I did already manage to turn on an LED with the Jetson. However, I am now trying to make it work with my model, which I have already trained. I would like to add this feature, but there is no documentation on how to do so. Any tip on which file I should edit, or where I should create/add the code?

I have tried to edit the detectnet.cpp file and the .py one as well, but the changes do not seem to be picked up when I run the Docker container.

You can make a new directory on your Jetson (outside of container), put your edited Python script in it, and then mount the directory into the container. Then any changes you make will show up inside that mounted directory in the container. See this section of the tutorial for more info:

https://github.com/dusty-nv/jetson-inference/blob/master/docs/imagenet-example-python-2.md#setting-up-the-project

Thank you for this again!

One last question: how can I load my trained detection model (which is in the python/training/detection/ssd directory and was trained with train_ssd.py)? Should I just specify the path in the code when I declare the network, or what should I do to make it use my model?

Also, is it possible to change the color of the detection boxes? I tried to edit the detectnet.cpp file, but when I build the Docker image it shows me the original file, even though I saved it many times. What I would like to do is change the first class color to red, as it is green by default.

Hi @fratanto, sorry for the delay - after you export your model to ONNX, you can load it into detectnet/detectnet.py either via the command line or in code; this is what the code would look like:

net = jetson.inference.detectNet(argv=['--model=model_dir/ssd-mobilenet.onnx', '--labels=model_dir/labels.txt', '--input-blob=input_0', '--output-cvg=scores', '--output-bbox=boxes'])
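
To tie it together with what you described, a hedged sketch of the full loop could look like this (the camera URI, serial port, and 0.9 threshold are assumptions; detections report Confidence in the 0-1 range, so 0.9 corresponds to 90%):

```python
# Hedged sketch: load the custom ONNX model, run detection on the camera,
# and notify the Arduino whenever a detection is above 90% confidence.
import jetson.inference
import jetson.utils
import serial

net = jetson.inference.detectNet(argv=["--model=model_dir/ssd-mobilenet.onnx",
                                       "--labels=model_dir/labels.txt",
                                       "--input-blob=input_0",
                                       "--output-cvg=scores",
                                       "--output-bbox=boxes"])

camera  = jetson.utils.videoSource("csi://0")             # or "/dev/video0" for a USB camera
display = jetson.utils.videoOutput("display://0")
arduino = serial.Serial("/dev/ttyACM0", 9600, timeout=1)  # assumed port, as above

while display.IsStreaming():
    img = camera.Capture()
    detections = net.Detect(img)

    for det in detections:
        if det.Confidence > 0.9:                          # 90% confidence threshold
            arduino.write(b"1")                           # Arduino turns the LED on
            print(net.GetClassDesc(det.ClassID), det.Confidence)

    display.Render(img)
```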

Let's continue following up on your other topic about this - https://forums.developer.nvidia.com/t/not-able-do-edit-the-file-detectnet-cpp/177271/18