I want to develop a single branch for which I have a trained ONNX model file, but I am unsure how to integrate it with the pipeline.
I have already done this for TAO-trained models, where I could easily find config files for the etlt files and just modify them as needed, but I cannot do the same for ONNX models. So I would like to know how one can use a custom ONNX model. I tried the BYOM converter, but some of the operators in my model are not supported by BYOM.
Can you elaborate on what config you want to apply?
DeepStream supports ONNX models directly (via the onnx-file option in the nvinfer config file). Can that work for your case?
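For reference, a minimal nvinfer config sketch using the onnx-file option might look like the following. The file paths, class count, and label file are placeholders you would replace with your own; this assumes a detector-style model and is not a complete config:

```ini
[property]
gpu-id=0
# Point directly at your custom ONNX model; DeepStream builds a
# TensorRT engine from it on first run (placeholder path).
onnx-file=/path/to/your_model.onnx
# Optional: cache the generated engine to skip rebuilds (placeholder path).
model-engine-file=/path/to/your_model.onnx_b1_gpu0_fp16.engine
labelfile-path=/path/to/labels.txt
batch-size=1
# 0=FP32, 1=INT8, 2=FP16
network-mode=2
# Placeholder: set to your model's number of classes.
num-detected-classes=4
gie-unique-id=1
```

Depending on the model's output layout, you may also need a custom output-parsing function (parse-bbox-func-name with a custom-lib-path), since DeepStream's built-in parsers only cover common detector formats.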
Thank you for sharing the resource, but as I mentioned, this model isn't TAO-trained, so I have neither the model configs nor the model engine that are usually generated by TAO.
There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.