Hi there, I finally managed to train a network in TensorFlow and I want to use it on the Nano. I know how to use a pretrained one, and the ones trained on the Nano with PyTorch, but I don't know how to call the network that I exported from my PC in a Python script on the Nano. It is an SSD-MobileNet. Please guide me.
You can install TensorFlow for the Nano with the instructions below:
After installation, you can use the same inference Python script on the Nano.
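For example, a minimal inference sketch for a SavedModel exported by the TF Object Detection API could look like this (the path and input size below are placeholders; adjust them to your export):

```python
import numpy as np
import tensorflow as tf

# Load the SavedModel exported by the TF Object Detection API
# ("exported_model/saved_model" is a placeholder path).
detect_fn = tf.saved_model.load("exported_model/saved_model")

# TF OD API detection models expect a uint8 batch of shape [1, H, W, 3].
image = np.zeros((1, 300, 300, 3), dtype=np.uint8)
detections = detect_fn(tf.constant(image))

# Standard output keys; boxes are normalized [ymin, xmin, ymax, xmax].
print(detections["detection_boxes"][0][:5])
print(detections["detection_scores"][0][:5])
```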
Thanks for your response. I followed this repo to train networks on my PC using a 2070 Super: How to Train Your Own Object Detector Using TensorFlow Object Detection API - neptune.ai, and then I exported it as shown here: TensorFlow Object Detection API: Best Practices to Training, Evaluation & Deployment - neptune.ai, which converted a .pb file to ONNX. I hoped to be able to use the exported ONNX file the same way I use the ones exported by the Nano here: jetson-inference/pytorch-ssd.md at master · dusty-nv/jetson-inference · GitHub. But the ONNX file from my PC does not run on my Nano. I am new to programming and AI, so there is a lot that escapes my comprehension. Could you recommend a way to get a file from my PC running on my Nano? Please provide me with some code example that I can write in the Nano script to get it working.
For example, to run the Jetson-exported ONNX I write (my full capture/detect/render loop is sketched below):
net2 = jetson.inference.detectNet('ssd-mobilenet-v2', ['--model=/media/kc/1.0T/4th/jetson-inference/python/training/detection/ssd/models/fruit2/ssd-mobilenet.onnx', '--input-blob=input_0', '--labels=/media/kc/1.0T/4th/jetson-inference/python/training/detection/ssd/models/fruit2/labels.txt', '--output-cvg=scores', '--output-bbox=boxes', '--camera=/dev/video0'], threshold=0.7)
Thanks for your time and help.
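For reference, the rest of my Nano script follows the standard jetson-inference capture/detect/render loop, roughly like this (the camera device and display sink are whatever your setup uses):

```python
import jetson.inference
import jetson.utils

# Same detectNet call as above, shortened here for readability.
net2 = jetson.inference.detectNet('ssd-mobilenet-v2', [...], threshold=0.7)

camera = jetson.utils.videoSource('/dev/video0')   # V4L2 camera
display = jetson.utils.videoOutput('display://0')  # on-screen window

while display.IsStreaming():
    img = camera.Capture()
    detections = net2.Detect(img)
    display.Render(img)
    display.SetStatus("FPS: {:.0f}".format(net2.GetNetworkFPS()))
```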
jetson-inference uses TensorRT as its backend inference engine.
You can first deploy the ONNX file with TensorRT directly to validate whether it is workable with TensorRT or not. Note that detectNet's ONNX support is written around the pytorch-ssd export (input_0 / scores / boxes), so a model converted from the TF Object Detection API may not load in jetson-inference even if TensorRT itself can parse it.
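For example, you can check whether TensorRT can parse the ONNX file at all with a short script like this (a sketch; the model filename is a placeholder, and it should be run on the Nano where TensorRT is installed):

```python
import tensorrt as trt

# Try to parse the ONNX model with TensorRT's ONNX parser.
logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

with open("ssd-mobilenet.onnx", "rb") as f:
    if parser.parse(f.read()):
        print("TensorRT parsed the model successfully.")
    else:
        # Print every parser error to see which layer/op is unsupported.
        for i in range(parser.num_errors):
            print(parser.get_error(i))
```

The trtexec tool that ships with TensorRT (under /usr/src/tensorrt/bin on Jetson) can run the same check from the command line.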