hkada
June 12, 2021, 7:59am
1
I built onnxruntime with Python bindings in the l4t-ml container, using the command below.
But I cannot use `onnxruntime.InferenceSession` (the `onnxruntime` module has no attribute `InferenceSession`).
I no longer have the build log, but it didn't show any errors.
```
./build.sh --config Release --update --build --enable_pybind --build_wheel --use_cuda --use_tensorrt --cudnn_home /usr/lib/aarch64-linux-gnu/ --tensorrt_home /usr/lib/aarch64-linux-gnu/
```
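One common cause of this symptom (an assumption on my part, not confirmed in the thread) is that Python resolves `import onnxruntime` to the source checkout or some other shadowing module instead of the installed wheel, so the compiled bindings that provide `InferenceSession` never load. A minimal sketch to check which module is actually being imported:

```python
# Minimal diagnostic: find out which onnxruntime Python actually imports.
import onnxruntime

# If this prints a path inside the onnxruntime source tree (or a stray
# onnxruntime.py) rather than site-packages, run Python from a different
# directory or reinstall the built wheel.
print(onnxruntime.__file__)

# True only when the compiled bindings were packaged and loaded correctly.
print(hasattr(onnxruntime, "InferenceSession"))
```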
Hi,
You can directly install our prebuilt onnxruntime package for Jetson from the link below:
https://elinux.org/Jetson_Zoo#ONNX_Runtime
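After installing the wheel, a quick sanity check along these lines (the model path `model.onnx` is a placeholder, not from the thread) confirms that `InferenceSession` and the GPU execution providers are available:

```python
# Quick sanity check for a GPU-enabled onnxruntime install on Jetson.
import onnxruntime as ort

# A correctly built GPU package should list TensorrtExecutionProvider
# and/or CUDAExecutionProvider here.
print(ort.get_available_providers())

# "model.onnx" is a placeholder; point it at any ONNX model on disk.
session = ort.InferenceSession(
    "model.onnx",
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)
print([inp.name for inp in session.get_inputs()])
```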
Alternatively, the detailed build commands can be found on the page below:
# Dockerfiles
**Execution Providers**
- CPU: [Dockerfile](Dockerfile.source), [Instructions](#cpu)
- CUDA/cuDNN: [Dockerfile](Dockerfile.cuda), [Instructions](#cuda)
- MIGraphX: [Dockerfile](Dockerfile.migraphx), [Instructions](#migraphx)
- ROCm: [Dockerfile](Dockerfile.rocm), [Instructions](#rocm)
- NUPHAR: [Dockerfile](Dockerfile.nuphar), [Instructions](#nuphar)
- OpenVINO: [Dockerfile](Dockerfile.openvino), [Instructions](#openvino)
- TensorRT: [Dockerfile](Dockerfile.tensorrt), [Instructions](#tensorrt)
- VitisAI: [Dockerfile](Dockerfile.vitisai)
**Platforms**
- ARM 32v7: [Dockerfile](Dockerfile.arm32v7), [Instructions](#arm-3264)
- ARM 64: [Dockerfile](Dockerfile.arm64), [Instructions](#arm-3264)
- NVIDIA Jetson TX1/TX2/Nano/Xavier: [Dockerfile](Dockerfile.jetson), [Instructions](#nvidia-jetson-tx1tx2nanoxavier)
**Other**
- ORT Training (torch-ort): [Dockerfiles](https://github.com/pytorch/ort/tree/main/docker)
- ONNX-Ecosystem (CPU + Converters): [Dockerfile](https://github.com/onnx/onnx-docker/blob/master/onnx-ecosystem/Dockerfile), [Instructions](https://github.com/onnx/onnx-docker/tree/master/onnx-ecosystem)
Thanks.