Access model before conversion to .tlt OR decode .tlt to .hdf5/.pb

Please provide the following information when requesting support.

• Hardware (T4/V100/Xavier/Nano/etc)
• Network Type (Detectnet_v2/Faster_rcnn/Yolo_v4/LPRnet/Mask_rcnn/Classification/etc): Classification_tf2
• TLT Version (Please run “tlt info --verbose” and share “docker_tag” here): v5.0.0
• Training spec file (if you have one, please share it here)
• How to reproduce the issue? (This is for errors. Please share the command line and the detailed log here.)

I would like to access the model file before it is internally converted to a .tlt file, or alternatively decode the .tlt file back to .hdf5 or .pb. Can you tell me how to do that? Exporting the .tlt model to .etlt and then to ONNX, only to convert it back to TFLite for deployment, introduces a lot of non-native ops and sometimes the conversion fails outright. I can see that code for this exists, but I am not sure how to use it.

You can try to run it by setting -m your.tlt, -o out.hdf5, and -k yourkey.

Which mode do I use for this? Something like
!tao deploy -m my.tlt -o out.hdf5 -k mykey ?

Please run the Python file directly inside the Docker container nvcr.io/nvidia/tao/tao-toolkit:5.0.0-tf2.11.0, not through the tao CLI.
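For anyone landing here later, a minimal sketch of what such a decode script can look like, assuming .tlt files are EFF archives and that the nvidia-eff package exposes the Archive.restore_artifact API seen in similar TAO threads. The script name decode_tlt.py, the artifact name, and the paths below are illustrative assumptions, not the shipped tool; locate and inspect the actual script inside the container before relying on it.

```python
# Sketch of a .tlt -> .hdf5 decode script (hypothetical "decode_tlt.py").
# ASSUMPTION: .tlt is an EFF archive and nvidia-eff provides
# Archive.restore_artifact / get_handle as used in similar TAO threads.
# Verify against the actual script shipped in the container.
import argparse
import os
import shutil

from eff.core import Archive


def decode_eff(eff_model_path, output_path, enc_key):
    """Restore the encrypted model artifact from a .tlt (EFF) archive."""
    eff_filename = os.path.basename(eff_model_path)
    eff_art = Archive.restore_artifact(
        restore_path=eff_model_path,
        artifact_name=eff_filename,  # assumed: artifact named after the file
        passphrase=enc_key,
    )
    # get_handle() points at the decrypted artifact on disk; copy it out.
    shutil.copy(eff_art.get_handle(), output_path)


if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Decode a .tlt model to .hdf5")
    parser.add_argument("-m", "--model", required=True, help="path to the .tlt file")
    parser.add_argument("-o", "--output", required=True, help="path for the decoded .hdf5")
    parser.add_argument("-k", "--key", required=True, help="encryption key used during training")
    args = parser.parse_args()
    decode_eff(args.model, args.output, args.key)
```

Inside the container it would then be invoked as python decode_tlt.py -m your.tlt -o out.hdf5 -k yourkey, matching the -m/-o/-k flags above, e.g. after starting the container with docker run --rm -it --gpus all -v /local/models:/workspace/models nvcr.io/nvidia/tao/tao-toolkit:5.0.0-tf2.11.0.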

