# ONNX Version Converter
ONNX provides a library for converting ONNX models between different
opset versions. The primary motivation is to improve backwards compatibility of ONNX
models without having to strengthen the spec for ONNX backends. This
allows backend developers to offer support for a particular opset version
and for users to write or export models to a particular opset version but
run in an environment with a different opset version. Implementation-wise, the library leverages the in-memory intermediate representation, which is much more convenient to manipulate than the raw protobuf structs, together with the converters to and from the protobuf format that were developed for the ONNX Optimizer.
You may be interested in invoking the provided op-specific adapters, or in
implementing new ones (or both). Default adapters only work in the default
domain, but they can be generalized to work across domains or to utilize new
conversion methods, depending on the nature of the relevant breaking changes.
## Invoking The Version Converter
The version converter may be invoked either via C++ or Python.
The Python API is described below, with an example.