Failed to install tensorrt-inference-server on Xavier

I am trying to build tensorrt-inference-server on Xavier, but the build fails with the following errors:

In file included from /home/nvidia/Documents/tensorrt-inference-server/src/backends/tensorrt/plan_utils.cc:27:0:
/home/nvidia/Documents/tensorrt-inference-server/build/trtis/../../src/backends/tensorrt/plan_utils.h:53:43: error: ‘TensorFormat’ is not a member of ‘nvinfer1’
 MemoryFormat ConvertTrtFmtToFmt(nvinfer1::TensorFormat trt_fmt);
                                           ^~~~~~~~~~~~
/home/nvidia/Documents/tensorrt-inference-server/build/trtis/../../src/backends/tensorrt/plan_utils.h:53:43: note: suggested alternative: ‘PluginFormat’
 MemoryFormat ConvertTrtFmtToFmt(nvinfer1::TensorFormat trt_fmt);
                                           ^~~~~~~~~~~~
                                           PluginFormat
/home/nvidia/Documents/tensorrt-inference-server/src/backends/tensorrt/plan_utils.cc:49:30: error: redefinition of ‘nvidia::inferenceserver::MemoryFormat nvidia::inferenceserver::ConvertTrtFmtToFmt’
 ConvertTrtFmtToFmt(nvinfer1::TensorFormat trt_fmt)
                              ^~~~~~~~~~~~
In file included from /home/nvidia/Documents/tensorrt-inference-server/src/backends/tensorrt/plan_utils.cc:27:0:
/home/nvidia/Documents/tensorrt-inference-server/build/trtis/../../src/backends/tensorrt/plan_utils.h:53:14: note: ‘nvidia::inferenceserver::MemoryFormat nvidia::inferenceserver::ConvertTrtFmtToFmt’ previously defined here
 MemoryFormat ConvertTrtFmtToFmt(nvinfer1::TensorFormat trt_fmt);
              ^~~~~~~~~~~~~~~~~~
/home/nvidia/Documents/tensorrt-inference-server/src/backends/tensorrt/plan_utils.cc:49:30: error: ‘TensorFormat’ is not a member of ‘nvinfer1’
 ConvertTrtFmtToFmt(nvinfer1::TensorFormat trt_fmt)
                              ^~~~~~~~~~~~
/home/nvidia/Documents/tensorrt-inference-server/src/backends/tensorrt/plan_utils.cc:49:30: note: suggested alternative: ‘PluginFormat’
 ConvertTrtFmtToFmt(nvinfer1::TensorFormat trt_fmt)
                              ^~~~~~~~~~~~
                              PluginFormat
src/backends/tensorrt/CMakeFiles/tensorrt-backend-library.dir/build.make:182: recipe for target 'src/backends/tensorrt/CMakeFiles/tensorrt-backend-library.dir/plan_utils.cc.o' failed
make[6]: *** [src/backends/tensorrt/CMakeFiles/tensorrt-backend-library.dir/plan_utils.cc.o] Error 1
make[6]: *** Waiting for unfinished jobs....
In file included from /home/nvidia/Documents/tensorrt-inference-server/src/backends/tensorrt/autofill.cc:31:0:
/home/nvidia/Documents/tensorrt-inference-server/build/trtis/../../src/backends/tensorrt/plan_utils.h:53:43: error: ‘TensorFormat’ is not a member of ‘nvinfer1’
 MemoryFormat ConvertTrtFmtToFmt(nvinfer1::TensorFormat trt_fmt);
                                           ^~~~~~~~~~~~
/home/nvidia/Documents/tensorrt-inference-server/build/trtis/../../src/backends/tensorrt/plan_utils.h:53:43: note: suggested alternative: ‘PluginFormat’
 MemoryFormat ConvertTrtFmtToFmt(nvinfer1::TensorFormat trt_fmt);
                                           ^~~~~~~~~~~~
                                           PluginFormat
/home/nvidia/Documents/tensorrt-inference-server/src/backends/tensorrt/autofill.cc: In member function ‘nvidia::inferenceserver::Status nvidia::inferenceserver::AutoFillPlanImpl::Init(nvidia::inferenceserver::ModelConfig*)’:
/home/nvidia/Documents/tensorrt-inference-server/src/backends/tensorrt/autofill.cc:134:37: error: ‘class nvinfer1::ICudaEngine’ has no member named ‘getNbOptimizationProfiles’
             num_profiles = engine_->getNbOptimizationProfiles();
                                     ^~~~~~~~~~~~~~~~~~~~~~~~~
/home/nvidia/Documents/tensorrt-inference-server/src/backends/tensorrt/autofill.cc:167:47: error: ‘class nvinfer1::ICudaEngine’ has no member named ‘getProfileDimensions’; did you mean ‘getBindingDimensions’?
           nvinfer1::Dims min_shape = engine_->getProfileDimensions(
                                               ^~~~~~~~~~~~~~~~~~~~
                                               getBindingDimensions
/home/nvidia/Documents/tensorrt-inference-server/src/backends/tensorrt/autofill.cc:169:25: error: ‘nvinfer1::OptProfileSelector’ has not been declared
               nvinfer1::OptProfileSelector::kMIN);
                         ^~~~~~~~~~~~~~~~~~
/home/nvidia/Documents/tensorrt-inference-server/src/backends/tensorrt/autofill.cc:188:47: error: ‘class nvinfer1::ICudaEngine’ has no member named ‘getProfileDimensions’; did you mean ‘getBindingDimensions’?
           nvinfer1::Dims max_shape = engine_->getProfileDimensions(
                                               ^~~~~~~~~~~~~~~~~~~~
                                               getBindingDimensions
/home/nvidia/Documents/tensorrt-inference-server/src/backends/tensorrt/autofill.cc:190:25: error: ‘nvinfer1::OptProfileSelector’ has not been declared
               nvinfer1::OptProfileSelector::kMAX);
                         ^~~~~~~~~~~~~~~~~~
/home/nvidia/Documents/tensorrt-inference-server/src/backends/tensorrt/autofill.cc: In member function ‘void nvidia::inferenceserver::AutoFillPlanImpl::InitIOLists()’:
/home/nvidia/Documents/tensorrt-inference-server/src/backends/tensorrt/autofill.cc:362:43: error: ‘class nvinfer1::ICudaEngine’ has no member named ‘getNbOptimizationProfiles’
       engine_->getNbBindings() / engine_->getNbOptimizationProfiles();
                                           ^~~~~~~~~~~~~~~~~~~~~~~~~
In file included from /home/nvidia/Documents/tensorrt-inference-server/src/backends/tensorrt/plan_backend.cc:33:0:
/home/nvidia/Documents/tensorrt-inference-server/build/trtis/../../src/backends/tensorrt/plan_utils.h:53:43: error: ‘TensorFormat’ is not a member of ‘nvinfer1’
 MemoryFormat ConvertTrtFmtToFmt(nvinfer1::TensorFormat trt_fmt);
                                           ^~~~~~~~~~~~
/home/nvidia/Documents/tensorrt-inference-server/build/trtis/../../src/backends/tensorrt/plan_utils.h:53:43: note: suggested alternative: ‘PluginFormat’
 MemoryFormat ConvertTrtFmtToFmt(nvinfer1::TensorFormat trt_fmt);
                                           ^~~~~~~~~~~~
                                           PluginFormat
/home/nvidia/Documents/tensorrt-inference-server/src/backends/tensorrt/plan_backend.cc: In member function ‘void nvidia::inferenceserver::PlanBackend::Context::InitProfile()’:
/home/nvidia/Documents/tensorrt-inference-server/src/backends/tensorrt/plan_backend.cc:188:39: error: ‘class nvinfer1::ICudaEngine’ has no member named ‘getNbOptimizationProfiles’
   const int total_profiles = engine_->getNbOptimizationProfiles();
                                       ^~~~~~~~~~~~~~~~~~~~~~~~~
/home/nvidia/Documents/tensorrt-inference-server/src/backends/tensorrt/plan_backend.cc: In member function ‘nvidia::inferenceserver::Status nvidia::inferenceserver::PlanBackend::CreateExecutionContext(const string&, int, const std::unordered_map<std::__cxx11::basic_string<char>, std::vector<char> >&, std::__cxx11::string)’:
/home/nvidia/Documents/tensorrt-inference-server/src/backends/tensorrt/plan_backend.cc:272:29: error: ‘class nvinfer1::IExecutionContext’ has no member named ‘setOptimizationProfile’
     if (!context->context_->setOptimizationProfile(profile_index)) {
                             ^~~~~~~~~~~~~~~~~~~~~~
/home/nvidia/Documents/tensorrt-inference-server/src/backends/tensorrt/plan_backend.cc:279:37: error: ‘class nvinfer1::ICudaEngine’ has no member named ‘getNbOptimizationProfiles’
                   context->engine_->getNbOptimizationProfiles() - 1));
                                     ^~~~~~~~~~~~~~~~~~~~~~~~~
/home/nvidia/Documents/tensorrt-inference-server/src/backends/tensorrt/plan_backend.cc:321:27: error: ‘class nvinfer1::IExecutionContext’ has no member named ‘allInputDimensionsSpecified’
   if (!context->context_->allInputDimensionsSpecified()) {
                           ^~~~~~~~~~~~~~~~~~~~~~~~~~~
/home/nvidia/Documents/tensorrt-inference-server/src/backends/tensorrt/plan_backend.cc: In member function ‘nvidia::inferenceserver::Status nvidia::inferenceserver::PlanBackend::Context::InitializeInputBinding(const string&, nvidia::inferenceserver::DataType, const DimsList&, bool, bool)’:
/home/nvidia/Documents/tensorrt-inference-server/src/backends/tensorrt/plan_backend.cc:468:50: error: ‘class nvinfer1::ICudaEngine’ has no member named ‘getBindingFormat’; did you mean ‘getBindingName’?
   MemoryFormat fmt = ConvertTrtFmtToFmt(engine_->getBindingFormat(index));
                                                  ^~~~~~~~~~~~~~~~
                                                  getBindingName
/home/nvidia/Documents/tensorrt-inference-server/src/backends/tensorrt/plan_backend.cc:468:73: error: ‘nvidia::inferenceserver::ConvertTrtFmtToFmt’ cannot be used as a function
   MemoryFormat fmt = ConvertTrtFmtToFmt(engine_->getBindingFormat(index));
                                                                         ^
/home/nvidia/Documents/tensorrt-inference-server/src/backends/tensorrt/plan_backend.cc:503:48: error: ‘class nvinfer1::ICudaEngine’ has no member named ‘getProfileDimensions’; did you mean ‘getBindingDimensions’?
     nvinfer1::Dims max_profile_dims = engine_->getProfileDimensions(
                                                ^~~~~~~~~~~~~~~~~~~~
                                                getBindingDimensions
/home/nvidia/Documents/tensorrt-inference-server/src/backends/tensorrt/plan_backend.cc:504:42: error: ‘nvinfer1::OptProfileSelector’ has not been declared
         index, profile_index_, nvinfer1::OptProfileSelector::kMAX);
                                          ^~~~~~~~~~~~~~~~~~
/home/nvidia/Documents/tensorrt-inference-server/src/backends/tensorrt/plan_backend.cc:505:51: error: ‘class nvinfer1::ICudaEngine’ has no member named ‘getProfileDimensions’; did you mean ‘getBindingDimensions’?
     min_dims_[index - binding_offset_] = engine_->getProfileDimensions(
                                                   ^~~~~~~~~~~~~~~~~~~~
                                                   getBindingDimensions
/home/nvidia/Documents/tensorrt-inference-server/src/backends/tensorrt/plan_backend.cc:506:42: error: ‘nvinfer1::OptProfileSelector’ has not been declared
         index, profile_index_, nvinfer1::OptProfileSelector::kMIN);
                                          ^~~~~~~~~~~~~~~~~~
/home/nvidia/Documents/tensorrt-inference-server/src/backends/tensorrt/plan_backend.cc:558:20: error: ‘class nvinfer1::IExecutionContext’ has no member named ‘setBindingDimensions’
     if (!context_->setBindingDimensions(index, input_dim)) {
                    ^~~~~~~~~~~~~~~~~~~~
src/backends/tensorrt/CMakeFiles/tensorrt-backend-library.dir/build.make:62: recipe for target 'src/backends/tensorrt/CMakeFiles/tensorrt-backend-library.dir/autofill.cc.o' failed
make[6]: *** [src/backends/tensorrt/CMakeFiles/tensorrt-backend-library.dir/autofill.cc.o] Error 1
/home/nvidia/Documents/tensorrt-inference-server/src/backends/tensorrt/plan_backend.cc: In member function ‘nvidia::inferenceserver::Status nvidia::inferenceserver::PlanBackend::Context::InitializeConfigOutputBindings(const google::protobuf::RepeatedPtrField<nvidia::inferenceserver::ModelOutput>&, bool)’:
/home/nvidia/Documents/tensorrt-inference-server/src/backends/tensorrt/plan_backend.cc:676:52: error: ‘class nvinfer1::ICudaEngine’ has no member named ‘getBindingFormat’; did you mean ‘getBindingName’?
     MemoryFormat fmt = ConvertTrtFmtToFmt(engine_->getBindingFormat(index));
                                                    ^~~~~~~~~~~~~~~~
                                                    getBindingName
/home/nvidia/Documents/tensorrt-inference-server/src/backends/tensorrt/plan_backend.cc:676:75: error: ‘nvidia::inferenceserver::ConvertTrtFmtToFmt’ cannot be used as a function
     MemoryFormat fmt = ConvertTrtFmtToFmt(engine_->getBindingFormat(index));
                                                                           ^
/home/nvidia/Documents/tensorrt-inference-server/src/backends/tensorrt/plan_backend.cc:703:51: error: ‘class nvinfer1::IExecutionContext’ has no member named ‘getBindingDimensions’
       const nvinfer1::Dims output_dim = context_->getBindingDimensions(index);
                                                   ^~~~~~~~~~~~~~~~~~~~
/home/nvidia/Documents/tensorrt-inference-server/src/backends/tensorrt/plan_backend.cc: In member function ‘virtual nvidia::inferenceserver::Status nvidia::inferenceserver::PlanBackend::Context::Run(const nvidia::inferenceserver::InferenceBackend*, std::vector<nvidia::inferenceserver::Scheduler::Payload>*)’:
/home/nvidia/Documents/tensorrt-inference-server/src/backends/tensorrt/plan_backend.cc:896:22: error: ‘class nvinfer1::IExecutionContext’ has no member named ‘setBindingDimensions’
       if (!context_->setBindingDimensions(binding_offset_ + bindex, this_dim)) {
                      ^~~~~~~~~~~~~~~~~~~~
/home/nvidia/Documents/tensorrt-inference-server/src/backends/tensorrt/plan_backend.cc:927:22: error: ‘class nvinfer1::IExecutionContext’ has no member named ‘allInputDimensionsSpecified’
       if (!context_->allInputDimensionsSpecified()) {
                      ^~~~~~~~~~~~~~~~~~~~~~~~~~~
/home/nvidia/Documents/tensorrt-inference-server/src/backends/tensorrt/plan_backend.cc:932:22: error: ‘class nvinfer1::IExecutionContext’ has no member named ‘enqueueV2’; did you mean ‘enqueue’?
       if (!context_->enqueueV2(buffers_, stream_, nullptr)) {
                      ^~~~~~~~~
                      enqueue
/home/nvidia/Documents/tensorrt-inference-server/src/backends/tensorrt/plan_backend.cc:967:24: error: ‘class nvinfer1::IExecutionContext’ has no member named ‘getBindingDimensions’
       dims = context_->getBindingDimensions(binding_offset_ + bindex);
                        ^~~~~~~~~~~~~~~~~~~~
[ 53%] Built target ensemble-backend-library
src/backends/tensorrt/CMakeFiles/tensorrt-backend-library.dir/build.make:158: recipe for target 'src/backends/tensorrt/CMakeFiles/tensorrt-backend-library.dir/plan_backend.cc.o' failed
make[6]: *** [src/backends/tensorrt/CMakeFiles/tensorrt-backend-library.dir/plan_backend.cc.o] Error 1
CMakeFiles/Makefile2:633: recipe for target 'src/backends/tensorrt/CMakeFiles/tensorrt-backend-library.dir/all' failed
make[5]: *** [src/backends/tensorrt/CMakeFiles/tensorrt-backend-library.dir/all] Error 2
make[5]: *** Waiting for unfinished jobs....
[ 53%] Built target http-endpoint-library
[ 55%] Built target custom-backend-library
[ 75%] Built target server-library
[ 75%] Built target grpc-endpoint-library
Makefile:129: recipe for target 'all' failed
make[4]: *** [all] Error 2
CMakeFiles/trtis.dir/build.make:115: recipe for target 'trtis/src/trtis-stamp/trtis-build' failed
make[3]: *** [trtis/src/trtis-stamp/trtis-build] Error 2
CMakeFiles/Makefile2:369: recipe for target 'CMakeFiles/trtis.dir/all' failed
make[2]: *** [CMakeFiles/trtis.dir/all] Error 2
CMakeFiles/Makefile2:381: recipe for target 'CMakeFiles/trtis.dir/rule' failed
make[1]: *** [CMakeFiles/trtis.dir/rule] Error 2
Makefile:222: recipe for target 'trtis' failed
make: *** [trtis] Error 2

My JetPack version is 4.2.2; CUDA version is 10.0.326.

TRTIS isn’t officially supported on Jetson / ARM, but in this case it looks like you simply haven’t installed the TensorRT development package.
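One way to verify what the compiler actually finds is to build a tiny probe against the TensorRT headers. This is just a sketch (the file name is mine, and include/library paths may differ on your image); it only uses the version macros from NvInfer.h and the getInferLibVersion() entry point from libnvinfer:

// trt_probe.cc -- hypothetical probe; compile with something like:
//   g++ trt_probe.cc -I/usr/local/cuda/include -o trt_probe -lnvinfer
// If the TensorRT development package is not installed, this fails
// at the #include below.
#include <cstdio>
#include <NvInfer.h>

int main() {
  // Version macros baked into the headers found on the include path.
  std::printf("TensorRT headers: %d.%d.%d\n",
              NV_TENSORRT_MAJOR, NV_TENSORRT_MINOR, NV_TENSORRT_PATCH);
  // Version of the libnvinfer actually linked, encoded as
  // major * 1000 + minor * 100 + patch.
  std::printf("TensorRT library: %d\n", getInferLibVersion());
  return 0;
}

If the probe compiles but reports an older version than the TRTIS sources were written against (the missing symbols in your log, e.g. enqueueV2 and OptProfileSelector, are relatively recent additions to the TensorRT API), check out a server release that matches the TensorRT version shipped with your JetPack.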