Hi, I trained ssd_mobilenet_v2_fpnlite_640x640_coco17_tpu-8 with the TensorFlow Object Detection API 2 and exported saved_model.pb. Can I use this model on a Jetson Xavier or Nano?
Which inference framework do you prefer?
You can use TensorFlow directly on the Xavier NX.
The package can be installed with the following instructions:
Or you can run it with TensorRT after you convert the model into ONNX format.
You can find an example for the TF object detection (TFOD) model below:
Thanks for the reply. I tried to convert a custom EfficientDet-D0 model to TensorRT, but I got this error.
Traceback (most recent call last):
  File "create_onnx.py", line 451, in <module>
  File "create_onnx.py", line 427, in main
  File "create_onnx.py", line 341, in update_nms
    anchors_tensor = extract_anchors_tensor(box_net_split)
  File "create_onnx.py", line 284, in extract_anchors_tensor
    anchors = np.concatenate([anchors_y, anchors_x, anchors_h, anchors_w], axis=2)
  File "<__array_function__ internals>", line 6, in concatenate
ValueError: all the input array dimensions for the concatenation axis must match exactly, but along dimension 1, the array at index 0 has size 49104 and the array at index 2 has size 1
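For context on what this error means: np.concatenate along axis=2 requires all inputs to agree on every other dimension, and here the third array has size 1 along dimension 1 where the first has 49104. A minimal NumPy sketch of the failure and the shape condition that fixes it (the shapes below are illustrative, taken from the numbers in the traceback, not from the actual create_onnx.py internals):

```python
import numpy as np

# Illustrative shapes modeled on the error message: the y/x anchor
# tensors have 49104 entries along dimension 1, but the h/w tensors
# collapsed to size 1 there.
anchors_y = np.zeros((1, 49104, 1))
anchors_x = np.zeros((1, 49104, 1))
anchors_h = np.zeros((1, 1, 1))  # mismatched: should be (1, 49104, 1)
anchors_w = np.zeros((1, 1, 1))

try:
    np.concatenate([anchors_y, anchors_x, anchors_h, anchors_w], axis=2)
except ValueError as e:
    # Reproduces the same class of error as in the traceback above.
    print("ValueError:", e)

# Concatenation only succeeds once all four tensors share the
# non-concatenation dimensions:
anchors_h = np.broadcast_to(anchors_h, (1, 49104, 1))
anchors_w = np.broadcast_to(anchors_w, (1, 49104, 1))
anchors = np.concatenate([anchors_y, anchors_x, anchors_h, anchors_w], axis=2)
print(anchors.shape)  # (1, 49104, 4)
```

Broadcasting is shown here only to illustrate the shape requirement; in the real script the mismatch usually indicates that the anchor tensors extracted from the graph do not have the layout the conversion script expects, which is why the question below about how the model was customized matters.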
There has been no update from you for a while, so we assume this is no longer an issue.
Hence we are closing this topic. If you need further support, please open a new one.
What kind of customization did you make?
Did you retrain the model with transfer learning, or have you modified the architecture?