Building a container with ONNX Runtime, TensorRT, and PyTorch

Hello,

I am trying to bootstrap ONNX Runtime with the TensorRT Execution Provider, alongside PyTorch, inside a Docker container to serve some models.

After a ton of digging, it looks like I need to build the onnxruntime wheel myself to enable TensorRT support, so I do something like the following in my Dockerfile:

FROM nvcr.io/nvidia/tensorrt:21.03-py3 as onnxruntime

ARG ONNXRUNTIME_REPO=https://github.com/Microsoft/onnxruntime
ARG ONNXRUNTIME_BRANCH=v1.7.2

RUN apt-get update &&\
    apt-get install -y sudo git bash unattended-upgrades

RUN unattended-upgrade

RUN python -m pip install --upgrade pip setuptools wheel

WORKDIR /code

ENV PATH /usr/local/nvidia/bin:/usr/local/cuda/bin:/code/cmake-3.14.3-Linux-x86_64/bin:/opt/miniconda/bin:${PATH}

# Prepare onnxruntime repository & build onnxruntime with TensorRT
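# (--use_tensorrt/--tensorrt_home enable the TensorRT Execution Provider;
#  --build_wheel emits the pip wheel under build/Linux/Release/dist)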
RUN git clone --single-branch --branch ${ONNXRUNTIME_BRANCH} --recursive ${ONNXRUNTIME_REPO} onnxruntime &&\
    /bin/sh onnxruntime/dockerfiles/scripts/install_common_deps.sh &&\
    cd onnxruntime &&\
    /bin/sh ./build.sh --parallel --cuda_home /usr/local/cuda --cudnn_home /usr/lib/x86_64-linux-gnu/ --use_tensorrt --tensorrt_home /workspace/tensorrt --config Release --build_wheel --update --build --cmake_extra_defines ONNXRUNTIME_VERSION=$(cat ./VERSION_NUMBER)

FROM nvcr.io/nvidia/pytorch:21.03-py3

RUN --mount=type=cache,id=apt-dev,target=/var/cache/apt \
    apt-get update && apt-get install -y --no-install-recommends \
        build-essential \
        ca-certificates \
        curl && \
    rm -rf /var/lib/apt/lists/* && \
    useradd --create-home mrc

USER mrc

ENV PATH="$PATH:/home/mrc/.local/bin" \
    TORCH_HOME=/home/mrc/torch_models

COPY ./requirements/requirements.txt /tmp/requirements.txt

COPY --from=onnxruntime /code/onnxruntime/build/Linux/Release/dist/*.whl /tmp/

# Upgrade pip, setuptools and wheel,
# install the onnxruntime wheel built in the first stage,
# then install the application requirements
RUN python -m pip install --upgrade pip setuptools wheel && \
    python -m pip install /tmp/*.whl && \
    python -m pip install -r /tmp/requirements.txt

# I then expose some ports and start my application with gunicorn

In the above, I essentially do a two-stage build: in the first stage I generate the Python wheel with the TensorRT Execution Provider, and in the second stage, starting from NVIDIA’s PyTorch container, I copy the wheel over and install it.

However, I run into the following issue:

#16 4.192 ERROR: onnxruntime_gpu_tensorrt-1.7.2-cp37-cp37m-linux_x86_64.whl is not a supported wheel on this platform.

Both stages start from the same NVIDIA versioned base containers and contain the same Python, nvcc, OS, etc. Note that I am using NVIDIA’s 21.03 containers, but the same issue persists on the 20.12 containers as well (which is the version used by the Dockerfile.tensorrt example in the onnxruntime repository).

I also notice that the compiled wheel is tagged cp37, while the container where I compiled it has Python 3.8. Is this difference causing the issue? Any pointers on what I could be doing wrong here?
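One way to check whether the tag mismatch is the problem is to compare the wheel’s tag against the tags the target interpreter actually accepts (pip >= 19.2 has a debug command for this; pip marks it as unstable, so treat it as a diagnostic only):

python --version                          # 3.8.x in the pytorch:21.03 image
python -m pip debug --verbose | grep cp3  # lists the interpreter's compatible wheel tags

If cp37 does not appear among the compatible tags, pip rejects the wheel with exactly the “not a supported wheel on this platform” error above.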

Thank you!

(I would also appreciate any advice on whether there is an easier way to accomplish the above; the multi-stage Docker build feels like a bit of overkill for my task…)

Ok, I eventually found out why this was not working.

In the first stage, the following line, which runs one of the helper scripts shipped with onnxruntime,

/bin/sh onnxruntime/dockerfiles/scripts/install_common_deps.sh &&\

was replacing the default Python installation (3.8) with 3.7, so the generated cp37 wheel could not be installed against the Python 3.8 in the second stage. I removed the script and reproduced the parts I actually needed manually in my Dockerfile to get it fully working.
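In case it helps anyone debugging something similar, a one-line check early in the first stage would have caught this (my addition, not part of the original scripts):

RUN which python && python --version
# If this reports 3.7 while the serving image ships 3.8, the resulting cp37
# wheel will be rejected by pip in the second stage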

Here’s the final Dockerfile that works.

FROM nvcr.io/nvidia/tensorrt:21.03-py3 as onnxruntime

ARG ONNXRUNTIME_REPO=https://github.com/Microsoft/onnxruntime
ARG ONNXRUNTIME_BRANCH=master

RUN --mount=type=cache,id=apt-dev,target=/var/cache/apt \
    apt-get update &&\
    DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends \
        sudo \
        git \
        bash \
        wget \
        zip \
        ca-certificates \
        build-essential \
        curl \
        libcurl4-openssl-dev \
        libssl-dev

WORKDIR /code

ENV PATH /code/cmake-3.14.3-Linux-x86_64/bin:/opt/miniconda/bin:${PATH}

# Install Miniconda and CMake by hand (previously handled by
# install_common_deps.sh), so we control which Python ends up on PATH
RUN wget --quiet https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O ~/miniconda.sh --no-check-certificate &&\
    /bin/bash ~/miniconda.sh -b -p /opt/miniconda &&\
    rm ~/miniconda.sh &&\
    /opt/miniconda/bin/conda clean -ya

RUN pip install --upgrade pip numpy &&\
    rm -rf /opt/miniconda/pkgs

RUN wget --quiet https://github.com/Kitware/CMake/releases/download/v3.14.3/cmake-3.14.3-Linux-x86_64.tar.gz &&\
    tar zxf cmake-3.14.3-Linux-x86_64.tar.gz &&\
    rm -rf cmake-3.14.3-Linux-x86_64.tar.gz

# Prepare onnxruntime repository & build onnxruntime with TensorRT
RUN git clone --single-branch --branch ${ONNXRUNTIME_BRANCH} --recursive ${ONNXRUNTIME_REPO} onnxruntime &&\
    cd onnxruntime &&\
    /bin/sh ./build.sh --parallel --cuda_home /usr/local/cuda --cudnn_home /usr/lib/x86_64-linux-gnu/ --use_tensorrt --tensorrt_home /workspace/tensorrt --config Release --build_wheel --update --build --cmake_extra_defines ONNXRUNTIME_VERSION=$(cat ./VERSION_NUMBER) &&\
    pip install /code/onnxruntime/build/Linux/Release/dist/*.whl
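
# Optional sanity check (my addition, not part of the original build scripts):
# the wheel we just installed should expose the TensorRT Execution Provider
RUN python -c "import onnxruntime as ort; assert 'TensorrtExecutionProvider' in ort.get_available_providers()"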

FROM nvcr.io/nvidia/pytorch:21.03-py3

RUN --mount=type=cache,id=apt-dev,target=/var/cache/apt \
    apt-get update && apt-get install -y --no-install-recommends \
        build-essential \
        ca-certificates \
        curl &&\
    rm -rf /var/lib/apt/lists/* &&\
    useradd --create-home mrc

USER mrc

ENV PATH="$PATH:/home/mrc/.local/bin" \
    TORCH_HOME=/home/mrc/torch_models

COPY ./requirements/requirements.txt /tmp/requirements.txt

COPY --from=onnxruntime /code/onnxruntime/build/Linux/Release/dist/*.whl /tmp/

# Upgrade pip, setuptools and wheel,
# install the onnxruntime wheel built in the first stage,
# then install the application requirements
RUN python -m pip install --upgrade pip setuptools wheel && \
    python -m pip install /tmp/*.whl && \
    python -m pip install -r /tmp/requirements.txt

# ... Expose the ports and run the app
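
If it helps anyone, a quick smoke test along these lines should confirm everything landed (the image tag is a placeholder, and --gpus all needs Docker 19.03+ with the NVIDIA container runtime):

docker build -t ort-trt-serving .
docker run --rm --gpus all ort-trt-serving \
    python -c "import torch, onnxruntime as ort; print(torch.__version__, ort.get_available_providers())"

TensorrtExecutionProvider should show up in the provider list; if only CPUExecutionProvider appears, the wheel from the first stage is not the one that got installed.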