Question 1:
In what framework was this model trained before conversion to ONNX? (It seems to have been trained in TLT, as expected.)
Question 2:
If the training framework is PyTorch, I need to use the torch.onnx.export function to convert it to ONNX.
Before that, I know I need to create a model object from the network class:
model = NetworkClass()
I would like to know the network definition required to create this model object.
If it is not PyTorch, can you tell me how to train the model on a custom dataset, and share the code to convert it to ONNX?
A TLT model can only be stored in the .etlt format or converted into a TensorRT engine with tlt-export.
We don't have a converter to parse it into ONNX format.
However, the model can be deserialized into TensorRT directly, so you can still use it within MMAPI by adding a deserializer.
For other frameworks, since ONNX is a popular intermediate format, you can almost always find a corresponding parser or tutorial for the framework you want to use.
Regarding resnet10_dynamic_batch.onnx, which is provided as a sample:
In what framework (TensorFlow, PyTorch, etc.) was the original model trained before it was converted to ONNX?
Hi,
The sample is converted from resnet10.caffemodel. We don't have a public tool for doing this; you may search online to see if there is a tool from the community. Since the file is for reference only, we suggest converting your own model to replace resnet10_dynamic_batch.onnx.
The model was trained on a private database, but you should be able to find a similar dataset online.
It was also trained with an internal tool similar to DIGITS.
However, we recommend moving to other frameworks, since Caffe support is now very limited.
For a similar architecture, you can check our Transfer Learning Toolkit.
The output model has its own format (.etlt) and is supported by DeepStream and TensorRT.