Hi, this is a follow-up to my questions in this topic.
I have a TF2 object detection model, fine-tuned to recognize a custom object, and I need to deploy it on Jetson NANO.
After copying the model to the NANO, I tried to convert it to TensorRT by using:
from tensorflow.python.compiler.tensorrt import trt_convert as trt

def main():
    input_saved_model_dir = 'my_custom_model_path/saved_model'
    output_saved_model_dir = 'trt_models/converted_model'
    # Convert the SavedModel with TF-TRT and write the result out
    converter = trt.TrtGraphConverterV2(input_saved_model_dir=input_saved_model_dir)
    converter.convert()
    converter.save(output_saved_model_dir)

if __name__ == '__main__':
    main()
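Before trying to run the converted model, its input and output signature can be inspected with the saved_model_cli tool that ships with the TensorFlow pip package (a sketch; the path matches the script above):

```shell
# Show the serving signature (input/output tensor names, dtypes, shapes)
# of the SavedModel produced by the conversion script
saved_model_cli show --dir trt_models/converted_model \
    --tag_set serve --signature_def serving_default
```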
This script produced another SavedModel at the selected output path. I assume it can be run after installing the TF Object Detection API on the Nano? I'm trying to do that, but the installation has been running for the past 3 days; it seems to be stuck resolving dependencies.
Or, is there any other way to run it natively after the conversion, for example by using detectnet?
I also tried the tf2onnx repository, but when I try to convert my model I get the error:
ValueError: StridedSlice: attribute new_axis_mask not supported
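For reference, the attempt used something like the standard tf2onnx CLI below (paths are placeholders). Newer tf2onnx releases and a higher --opset sometimes add support for previously unsupported attributes, so upgrading before retrying may be worth a try:

```shell
# Upgrade tf2onnx first; newer releases cover more ops/attributes
pip install -U tf2onnx

# Convert the SavedModel to ONNX with a recent opset
python -m tf2onnx.convert \
    --saved-model my_custom_model_path/saved_model \
    --output model.onnx \
    --opset 13
```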
As I mentioned, I copied the model to the Jetson Nano and ran the script posted above on the Nano itself; the second SavedModel was produced there.
Can I somehow use it now?
Usage is similar to TensorFlow in a desktop environment.
Since some layers are not supported by ONNX, please run inference through the TensorFlow interface.
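As a minimal sketch of that interface: the snippet below saves a tiny stand-in tf.Module (so it is self-contained) and then loads it and calls its serving signature. For the real model, replace 'demo_model' with the converted model's directory (e.g. trt_models/converted_model) and feed an image tensor of the shape the signature expects; the input name 'x' and output key 'scores' come from the stand-in, not from the detection model.

```python
import numpy as np
import tensorflow as tf

# Stand-in model so the snippet runs on its own; in practice you
# would load the converted detection SavedModel instead.
class Demo(tf.Module):
    @tf.function(input_signature=[tf.TensorSpec([None, 4], tf.float32)])
    def __call__(self, x):
        # Trivial computation standing in for the detection graph
        return {'scores': x * 2.0}

demo = Demo()
tf.saved_model.save(
    demo, 'demo_model',
    signatures=demo.__call__.get_concrete_function())

# Load the SavedModel and grab its default serving signature --
# the same pattern applies to the TF-TRT converted model.
loaded = tf.saved_model.load('demo_model')
infer = loaded.signatures['serving_default']

# Signature functions are called with keyword arguments named after
# the inputs; a detection model would take a [1, H, W, 3] image.
out = infer(x=tf.constant(np.ones((1, 4), np.float32)))
print({k: v.shape for k, v in out.items()})
```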