Hi, I want to deploy a YOLOv5 model on Triton Inference Server. After converting the model to ONNX format, what steps do I need to follow? The documentation gives the Triton PeopleNet example, but I would like clarity on how to make a build file for another model. There is also a peoplenet.cc file; how can I customize it for my use case?
Hi,
You can find an ONNXRuntime backend example in the folder below.
Please update the model name and the corresponding input/output information to run inference with Triton.
/opt/nvidia/deepstream/deepstream-6.0/samples/trtis_model_repo/densenet_onnx
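For reference, below is a minimal config.pbtxt sketch adapted from that densenet_onnx sample for a YOLOv5 ONNX model. The model name, tensor names, and shapes used here ("yolov5_onnx", "images", "output0", 640x640 input, 25200x85 output) are assumptions based on a default YOLOv5s export with 80 classes; verify them against your own ONNX file (for example with Netron) before use:

name: "yolov5_onnx"
platform: "onnxruntime_onnx"
max_batch_size: 0            # batch dimension is part of dims below (static-batch export assumed)
default_model_filename: "model.onnx"
input [
  {
    name: "images"           # default input name from the YOLOv5 export script; check your model
    data_type: TYPE_FP32
    dims: [ 1, 3, 640, 640 ]
  }
]
output [
  {
    name: "output0"          # may be named "output" in older YOLOv5 releases; check your model
    data_type: TYPE_FP32
    dims: [ 1, 25200, 85 ]   # 25200 candidate boxes x (4 box + 1 objectness + 80 classes) at 640x640
  }
]

Place this config.pbtxt next to a numbered version directory (for example yolov5_onnx/1/model.onnx) inside your model repository, following the same layout as the densenet_onnx sample.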
Thanks.
How can I make a build for my model before running Triton?
Hi,
Please check if the warmup option meets your requirements:
[The quoted configuration file is truncated; only its NVIDIA copyright header is shown in the original post.]
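Since the quoted file is truncated, here is a minimal, hedged sketch of what a model_warmup entry typically looks like in a Triton config.pbtxt. The request name, input name "images", and the 1x3x640x640 shape are assumptions for a default YOLOv5 export; adjust them to match your model:

# Warmup request sent once when the model loads, so the first real
# inference request does not pay the initialization cost.
model_warmup [
  {
    name: "yolov5_warmup"        # arbitrary label for this warmup request
    batch_size: 1
    inputs {
      key: "images"              # must match the input name in your ONNX model
      value: {
        data_type: TYPE_FP32
        dims: [ 1, 3, 640, 640 ] # full input shape for a non-batching (max_batch_size: 0) config
        zero_data: true          # all-zero dummy tensor; random_data or input_data_file also work
      }
    }
  }
]

With this in place, Triton issues the warmup request at model load time, so the first client request is not slowed down by lazy initialization in the ONNX Runtime backend.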
Thanks.
This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.