Trying to convert a Darknet_YOLOv3 frozen graph (.pb) from TensorFlow to TensorRT on Jetson AGX Xavier

Hello everyone, I am trying to convert a Darknet YOLOv3 frozen graph (.pb) from TensorFlow to TensorRT by following the steps of this URL, and this is the code I have written:

"""
TensorFlow to TensorRT converter with TensorFlow 1.15
Workflow with a fozen graph

"""

import tensorflow as tf
from tensorflow.python.compiler.tensorrt import trt_convert as trt

with tf.compat.v1.Session() as sess:
    # First deserialize your frozen graph:
    with tf.io.gfile.GFile("tensorflow-yolo-v3/frozen_darknet_yolov3_model.pb", 'rb') as f:
        frozen_graph = tf.compat.v1.GraphDef()
        frozen_graph.ParseFromString(f.read())
        # Now you can create a TensorRT inference graph from your
        # frozen graph:
    converter = trt.TrtGraphConverter(
	    input_graph_def=frozen_graph,
	    nodes_blacklist=['output_boxes']) #output nodes
    trt_graph = converter.convert()
    # Import the TensorRT graph into a new graph and run:
    output_node = tf.import_graph_def(
        trt_graph,
        return_elements=['output_boxes'])
    sess.run(output_node)

When I execute the Python file on my Jetson AGX Xavier, I get this output:

2021-04-22 11:03:28.275409: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1049] ARM64 does not support NUMA - returning NUMA node zero
2021-04-22 11:03:28.275503: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1351] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 27313 MB memory) -> physical GPU (device: 0, name: Xavier, pci bus id: 0000:00:00.0, compute capability: 7.2)
2021-04-22 11:03:31.784527: I tensorflow/compiler/tf2tensorrt/segment/segment.cc:486] There are 10 ops of 5 different types in the graph that are not converted to TensorRT: ResizeNearestNeighbor, ConcatV2, SplitV, NoOp, Placeholder, (For more information see https://docs.nvidia.com/deeplearning/frameworks/tf-trt-user-guide/index.html#supported-ops).
2021-04-22 11:03:32.017294: I tensorflow/compiler/tf2tensorrt/convert/convert_graph.cc:647] Number of TensorRT candidate segments: 6
2021-04-22 11:03:32.467664: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libnvinfer.so.7
2021-04-22 11:03:32.706683: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libnvinfer_plugin.so.7

Killed

How can I avoid this “Killed” message, and what causes it? Also, how can I save the frozen graph or the model after the conversion?

If anyone has an idea…?

Moving this topic from Frameworks to the Jetson AGX Xavier forum for better visibility.

Hi,

Usually, a kill like this is caused by running out of memory.
Which Xavier do you use: 8 GB, 16 GB, or 32 GB?

Could you also monitor your device's memory first?

$ sudo tegrastats
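If it helps, the same check can be scripted before launching the conversion; a minimal sketch, assuming a Linux/L4T system that exposes the POSIX `sysconf` names (the function names here are just illustrative):

```python
import os

def total_ram_bytes():
    """Total physical RAM, from POSIX sysconf (Linux/L4T)."""
    return os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES")

def available_ram_bytes():
    """RAM currently available without swapping (approximate)."""
    return os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_AVPHYS_PAGES")

print(f"total: {total_ram_bytes() / 2**20:.0f} MB, "
      f"available: {available_ram_bytes() / 2**20:.0f} MB")
```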

Another thing: based on the log, it seems the TF v2 API is used.
It's also recommended to validate the usage against our document below:
https://docs.nvidia.com/deeplearning/frameworks/tf-trt-user-guide/index.html#usage-example
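As a sketch of what the documented TF-TRT 1.x path can look like with more conservative memory settings, something like the following may help. The workspace size, output path, and function name are illustrative assumptions, not values from this thread; the TensorFlow import is deferred into the function so the sketch can be loaded on machines without TF installed:

```python
def convert_frozen_graph(pb_path, output_nodes, out_path,
                         workspace_bytes=1 << 28):
    """Convert a frozen graph with TF-TRT 1.x and save the result.

    workspace_bytes caps TensorRT's build workspace; lowering it leaves
    room on a shared-memory device like Xavier. Illustrative sketch,
    not a tested recipe.
    """
    import tensorflow as tf
    from tensorflow.python.compiler.tensorrt import trt_convert as trt

    # Deserialize the frozen graph from disk.
    with tf.io.gfile.GFile(pb_path, 'rb') as f:
        frozen_graph = tf.compat.v1.GraphDef()
        frozen_graph.ParseFromString(f.read())

    converter = trt.TrtGraphConverter(
        input_graph_def=frozen_graph,
        nodes_blacklist=output_nodes,
        max_workspace_size_bytes=workspace_bytes,
        precision_mode='FP16',   # halves weight memory on Xavier
        is_dynamic_op=True)      # build engines lazily, at run time
    trt_graph = converter.convert()

    # Saving the converted graph: a GraphDef is a protobuf, so it can
    # be serialized straight back to a .pb file.
    with tf.io.gfile.GFile(out_path, 'wb') as f:
        f.write(trt_graph.SerializeToString())
    return trt_graph
```

On top of that, TensorFlow's own GPU pre-allocation can be limited via a `tf.compat.v1.ConfigProto` (its `gpu_options.per_process_gpu_memory_fraction` field) passed to the converter's `session_config` argument.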

Thanks.

Thanks for your answer. It seems that using TensorFlow on Jetson boards requires too much GPU memory.
This is what I get when I run tegrastats while executing my program at the same time; the RAM allocation keeps increasing:

RAM 1883/31927MB (lfb 7202x4MB) SWAP 0/15964MB (cached 0MB) CPU [12%@2188,6%@2188,33%@2188,15%@2188,off,off,off,off] EMC_FREQ 0% GR3D_FREQ 6% AO@45C GPU@45.5C Tdiode@49.5C PMIC@100C AUX@45C CPU@46C thermal@45.45C Tboard@44C
RAM 2034/31927MB (lfb 7141x4MB) SWAP 0/15964MB (cached 0MB) CPU [28%@2188,23%@2188,47%@2188,22%@2188,off,off,off,off] EMC_FREQ 0% GR3D_FREQ 0% AO@45C GPU@46C Tdiode@49.75C PMIC@100C AUX@45C CPU@46.5C thermal@45.6C Tboard@44C
RAM 2222/31927MB (lfb 7076x4MB) SWAP 0/15964MB (cached 0MB) CPU [12%@2188,13%@2188,52%@2188,24%@2188,off,off,off,off] EMC_FREQ 0% GR3D_FREQ 13% AO@45C GPU@46C Tdiode@49.5C PMIC@100C AUX@45C CPU@46.5C thermal@45.75C Tboard@44C
RAM 2439/31927MB (lfb 7007x4MB) SWAP 0/15964MB (cached 0MB) CPU [8%@2188,7%@2188,55%@2188,19%@2188,off,off,off,off] EMC_FREQ 0% GR3D_FREQ 0% AO@45C GPU@46C Tdiode@49.75C PMIC@100C AUX@44.5C CPU@46.5C thermal@45.6C Tboard@44C
RAM 2803/31927MB (lfb 6858x4MB) SWAP 0/15964MB (cached 0MB) CPU [4%@2188,4%@2188,40%@2188,4%@2188,off,off,off,off] EMC_FREQ 0% GR3D_FREQ 0% AO@45C GPU@45.5C Tdiode@49.5C PMIC@100C AUX@44.5C CPU@46.5C thermal@45.25C Tboard@44C
RAM 3550/31927MB (lfb 6671x4MB) SWAP 0/15964MB (cached 0MB) CPU [10%@2188,6%@2188,100%@2188,6%@2188,off,off,off,off] EMC_FREQ 0% GR3D_FREQ 0% AO@45C GPU@45.5C Tdiode@49.75C PMIC@100C AUX@44.5C CPU@46.5C thermal@45.25C Tboard@44C
RAM 3166/31927MB (lfb 6766x4MB) SWAP 0/15964MB (cached 0MB) CPU [6%@2188,3%@2188,98%@2188,2%@2188,off,off,off,off] EMC_FREQ 0% GR3D_FREQ 0% AO@45C GPU@45.5C Tdiode@49.5C PMIC@100C AUX@45C CPU@46.5C thermal@45.6C Tboard@44C
RAM 4609/31927MB (lfb 6406x4MB) SWAP 0/15964MB (cached 0MB) CPU [1%@2188,3%@2188,100%@2188,0%@2188,off,off,off,off] EMC_FREQ 0% GR3D_FREQ 0% AO@45.5C GPU@46.5C Tdiode@49.5C PMIC@100C AUX@45C CPU@46.5C thermal@45.75C Tboard@44C
RAM 5102/31927MB (lfb 6282x4MB) SWAP 0/15964MB (cached 0MB) CPU [35%@2188,21%@2188,77%@2188,20%@2188,off,off,off,off] EMC_FREQ 0% GR3D_FREQ 0% AO@45C GPU@46C Tdiode@49.75C PMIC@100C AUX@45C CPU@47.5C thermal@45.75C Tboard@44C
RAM 5683/31927MB (lfb 6134x4MB) SWAP 0/15964MB (cached 0MB) CPU [63%@2188,36%@2188,100%@2188,32%@2188,off,off,off,off] EMC_FREQ 0% GR3D_FREQ 0% AO@45C GPU@46.5C Tdiode@49.75C PMIC@100C AUX@45.5C CPU@48C thermal@46.2C Tboard@44C
RAM 6495/31927MB (lfb 5915x4MB) SWAP 0/15964MB (cached 0MB) CPU [100%@2188,62%@2188,75%@2188,37%@2188,off,off,off,off] EMC_FREQ 0% GR3D_FREQ 0% AO@45.5C GPU@46C Tdiode@49.75C PMIC@100C AUX@45.5C CPU@48.5C thermal@46.55C Tboard@44C
RAM 6571/31927MB (lfb 5895x4MB) SWAP 0/15964MB (cached 0MB) CPU [94%@2188,65%@2188,98%@2188,61%@2188,off,off,off,off] EMC_FREQ 0% GR3D_FREQ 5% AO@45.5C GPU@46C Tdiode@49.75C PMIC@100C AUX@45.5C CPU@48C thermal@46.55C Tboard@44C
RAM 6747/31927MB (lfb 5830x4MB) SWAP 0/15964MB (cached 0MB) CPU [57%@2188,72%@2188,85%@2188,64%@2188,off,off,off,off] EMC_FREQ 0% GR3D_FREQ 2% AO@45.5C GPU@46C Tdiode@49.75C PMIC@100C AUX@45.5C CPU@48C thermal@46.55C Tboard@44C
RAM 6990/31927MB (lfb 5752x4MB) SWAP 0/15964MB (cached 0MB) CPU [30%@2188,100%@2188,40%@2188,45%@2188,off,off,off,off] EMC_FREQ 0% GR3D_FREQ 20% AO@45.5C GPU@46.5C Tdiode@49.75C PMIC@100C AUX@45.5C CPU@48C thermal@46.4C Tboard@44C
RAM 8315/31927MB (lfb 5411x4MB) SWAP 0/15964MB (cached 0MB) CPU [15%@2188,100%@2188,50%@2188,78%@2188,off,off,off,off] EMC_FREQ 0% GR3D_FREQ 0% AO@45.5C GPU@46C Tdiode@49.75C PMIC@100C AUX@45.5C CPU@48C thermal@46.4C Tboard@44C
RAM 12813/31927MB (lfb 4284x4MB) SWAP 0/15964MB (cached 0MB) CPU [14%@2188,69%@2188,34%@2188,100%@2188,off,off,off,off] EMC_FREQ 0% GR3D_FREQ 0% AO@45.5C GPU@46C Tdiode@50C PMIC@100C AUX@45.5C CPU@48C thermal@46.4C Tboard@44C
RAM 17146/31927MB (lfb 3201x4MB) SWAP 0/15964MB (cached 0MB) CPU [83%@2188,91%@2188,77%@2188,100%@2188,off,off,off,off] EMC_FREQ 0% GR3D_FREQ 0% AO@45.5C GPU@48.5C Tdiode@49.75C PMIC@100C AUX@45.5C CPU@48.5C thermal@46.4C Tboard@44C
RAM 21701/31927MB (lfb 2062x4MB) SWAP 0/15964MB (cached 0MB) CPU [46%@2188,100%@2188,23%@2188,100%@2188,off,off,off,off] EMC_FREQ 0% GR3D_FREQ 0% AO@45.5C GPU@47.5C Tdiode@50C PMIC@100C AUX@45.5C CPU@48C thermal@46.55C Tboard@44C
RAM 26288/31927MB (lfb 914x4MB) SWAP 0/15964MB (cached 0MB) CPU [54%@2188,31%@2188,66%@2188,64%@2188,off,off,off,off] EMC_FREQ 0% GR3D_FREQ 0% AO@45.5C GPU@48C Tdiode@50.25C PMIC@100C AUX@45.5C CPU@48C thermal@46.4C Tboard@44C
RAM 29597/31927MB (lfb 190x4MB) SWAP 1/15964MB (cached 0MB) CPU [100%@2188,23%@2188,100%@2188,41%@2188,off,off,off,off] EMC_FREQ 0% GR3D_FREQ 0% AO@46C GPU@46.5C Tdiode@50C PMIC@100C AUX@45.5C CPU@48.5C thermal@46.4C Tboard@44C
RAM 30805/31927MB (lfb 190x4MB) SWAP 15/15964MB (cached 0MB) CPU [100%@2188,95%@2188,74%@2188,68%@2188,off,off,off,off] EMC_FREQ 0% GR3D_FREQ 0% AO@45.5C GPU@46C Tdiode@50C PMIC@100C AUX@45.5C CPU@48.5C thermal@46.7C Tboard@44C
RAM 31115/31927MB (lfb 177x4MB) SWAP 186/15964MB (cached 0MB) CPU [100%@2188,84%@2188,73%@2188,100%@2188,off,off,off,off] EMC_FREQ 0% GR3D_FREQ 0% AO@46C GPU@48.5C Tdiode@50C PMIC@100C AUX@45.5C CPU@48.5C thermal@46.7C Tboard@44C
RAM 31171/31927MB (lfb 163x4MB) SWAP 434/15964MB (cached 1MB) CPU [94%@2188,74%@2188,69%@2188,100%@2188,off,off,off,off] EMC_FREQ 0% GR3D_FREQ 0% AO@45.5C GPU@46.5C Tdiode@50.25C PMIC@100C AUX@46C CPU@48.5C thermal@46.7C Tboard@44C
RAM 31285/31927MB (lfb 136x4MB) SWAP 717/15964MB (cached 12MB) CPU [71%@2188,100%@2188,100%@2188,89%@2188,off,off,off,off] EMC_FREQ 0% GR3D_FREQ 0% AO@46C GPU@48.5C Tdiode@50.5C PMIC@100C AUX@45.5C CPU@48.5C thermal@47.3C Tboard@44C
RAM 31411/31927MB (lfb 108x4MB) SWAP 1025/15964MB (cached 12MB) CPU [99%@2188,100%@2188,100%@2188,99%@2188,off,off,off,off] EMC_FREQ 0% GR3D_FREQ 0% AO@46C GPU@49C Tdiode@50C PMIC@100C AUX@45.5C CPU@49C thermal@47.3C Tboard@44C
RAM 31585/31927MB (lfb 68x4MB) SWAP 1392/15964MB (cached 12MB) CPU [99%@2188,95%@2188,99%@2188,98%@2188,off,off,off,off] EMC_FREQ 0% GR3D_FREQ 0% AO@46C GPU@46.5C Tdiode@50.25C PMIC@100C AUX@45.5C CPU@48.5C thermal@46.7C Tboard@44C
RAM 31719/31927MB (lfb 29x4MB) SWAP 2037/15964MB (cached 12MB) CPU [80%@2188,96%@2188,95%@2188,82%@2188,off,off,off,off] EMC_FREQ 0% GR3D_FREQ 0% AO@46C GPU@46.5C Tdiode@50.5C PMIC@100C AUX@46C CPU@48.5C thermal@46.85C Tboard@44C

I am using the 32 GB AGX Xavier, and I tried to free memory with the command:

free -h && sudo sysctl vm.drop_caches=3 && free -h

And after that, this is the output I get:

2021-04-22 17:00:38.831465: I tensorflow/core/common_runtime/bfc_allocator.cc:872] Bin (134217728): 	Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
2021-04-22 17:00:38.831566: I tensorflow/core/common_runtime/bfc_allocator.cc:872] Bin (268435456): 	Total Chunks: 1, Chunks in use: 1. 509.19MiB allocated for chunks. 509.19MiB in use in bin. 412.30MiB client-requested in use in bin.
2021-04-22 17:00:38.831652: I tensorflow/core/common_runtime/bfc_allocator.cc:888] Bin for 1.98MiB was 1.00MiB, Chunk State: 
2021-04-22 17:00:38.831694: I tensorflow/core/common_runtime/bfc_allocator.cc:901] Next region of size 533921792
2021-04-22 17:00:38.831746: I tensorflow/core/common_runtime/bfc_allocator.cc:908] InUse at 0x218a7a000 next 18446744073709551615 of size 533921792
2021-04-22 17:00:38.831787: I tensorflow/core/common_runtime/bfc_allocator.cc:917]      Summary of in-use Chunks by size: 
2021-04-22 17:00:38.831847: I tensorflow/core/common_runtime/bfc_allocator.cc:920] 1 Chunks of size 533921792 totalling 509.19MiB
2021-04-22 17:00:38.831895: I tensorflow/core/common_runtime/bfc_allocator.cc:924] Sum Total of in-use chunks: 509.19MiB
2021-04-22 17:00:38.831934: I tensorflow/core/common_runtime/bfc_allocator.cc:926] total_region_allocated_bytes_: 533921792 memory_limit_: 533921792 available bytes: 0 curr_region_allocation_bytes_: 1067843584
2021-04-22 17:00:38.832132: I tensorflow/core/common_runtime/bfc_allocator.cc:932] Stats: 
Limit:                   533921792
InUse:                   533921792
MaxInUse:                533921792
NumAllocs:                       1
MaxAllocSize:            533921792

2021-04-22 17:00:38.832198: W tensorflow/core/common_runtime/bfc_allocator.cc:427] *********************************************************************************xxxxxxxxxxxxxxxxxxx
2021-04-22 17:00:38.832288: E tensorflow/compiler/tf2tensorrt/utils/trt_logger.cc:41] DefaultLogger Requested amount of GPU memory (2076672 bytes) could not be allocated. There may not be enough free memory for allocation to succeed.
2021-04-22 17:00:38.832357: E tensorflow/compiler/tf2tensorrt/utils/trt_logger.cc:41] DefaultLogger /home/jenkins/workspace/TensorRT/helpers/rel-7.1/L1_Nightly_Internal/build/source/rtSafe/resources.h (181) - OutOfMemory Error in GpuMemory: 0
2021-04-22 17:00:39.150223: E tensorflow/compiler/tf2tensorrt/utils/trt_logger.cc:41] DefaultLogger Out of memory error during getBestTactic: (Unnamed Layer* 0) [Constant] + (Unnamed Layer* 1) [ElementWise]
2021-04-22 17:00:39.150387: E tensorflow/compiler/tf2tensorrt/utils/trt_logger.cc:41] DefaultLogger Try increasing the workspace size with IBuilderConfig::setMaxWorkspaceSize() if using IBuilder::buildEngineWithConfig, or IBuilder::setMaxWorkspaceSize() if using IBuilder::buildCudaEngine.
2021-04-22 17:00:39.150523: E tensorflow/compiler/tf2tensorrt/utils/trt_logger.cc:41] DefaultLogger ../builder/tacticOptimizer.cpp (1715) - TRTInternal Error in computeCosts: 0 (Could not find any implementation for node (Unnamed Layer* 0) [Constant] + (Unnamed Layer* 1) [ElementWise].)
2021-04-22 17:00:39.157632: E tensorflow/compiler/tf2tensorrt/utils/trt_logger.cc:41] DefaultLogger ../builder/tacticOptimizer.cpp (1715) - TRTInternal Error in computeCosts: 0 (Could not find any implementation for node (Unnamed Layer* 0) [Constant] + (Unnamed Layer* 1) [ElementWise].)
2021-04-22 17:00:39.160786: W tensorflow/compiler/tf2tensorrt/convert/convert_graph.cc:751] TensorRT node TRTEngineOp_0 added for segment 0 consisting of 397 nodes failed: Internal: Failed to build TensorRT engine. Fallback to TF...
2021-04-22 17:00:57.155215: I tensorflow/compiler/tf2tensorrt/convert/convert_graph.cc:748] TensorRT node detector/yolo-v3/TRTEngineOp_1 added for segment 1 consisting of 52 nodes succeeded.
2021-04-22 17:01:01.260716: I tensorflow/compiler/tf2tensorrt/convert/convert_graph.cc:748] TensorRT node detector/yolo-v3/TRTEngineOp_2 added for segment 2 consisting of 44 nodes succeeded.
2021-04-22 17:01:01.902294: W tensorflow/compiler/tf2tensorrt/convert/trt_optimization_pass.cc:183] TensorRTOptimizer is probably called on funcdef! This optimizer must *NOT* be called on function objects.
2021-04-22 17:01:01.983073: W tensorflow/compiler/tf2tensorrt/convert/trt_optimization_pass.cc:183] TensorRTOptimizer is probably called on funcdef! This optimizer must *NOT* be called on function objects.
2021-04-22 17:01:02.011498: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:821] Optimization results for grappler item: tf_graph
2021-04-22 17:01:02.011629: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:823]   constant_folding: Graph size after: 484 nodes (0), 656 edges (0), time = 1292.104ms.
2021-04-22 17:01:02.011722: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:823]   layout: Graph size after: 507 nodes (23), 679 edges (23), time = 677.796ms.
2021-04-22 17:01:02.011789: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:823]   constant_folding: Graph size after: 505 nodes (-2), 672 edges (-7), time = 489.878ms.
2021-04-22 17:01:02.011825: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:823]   TensorRTOptimizer: Graph size after: 411 nodes (-94), 552 edges (-120), time = 38363.5938ms.
2021-04-22 17:01:02.011869: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:823]   constant_folding: Graph size after: 411 nodes (0), 552 edges (0), time = 299.142ms.
2021-04-22 17:01:02.011896: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:821] Optimization results for grappler item: detector/yolo-v3/TRTEngineOp_1_native_segment
2021-04-22 17:01:02.011920: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:823]   constant_folding: Graph size after: 57 nodes (0), 69 edges (0), time = 25.955ms.
2021-04-22 17:01:02.011939: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:823]   layout: Graph size after: 57 nodes (0), 69 edges (0), time = 30.07ms.
2021-04-22 17:01:02.011955: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:823]   constant_folding: Graph size after: 57 nodes (0), 69 edges (0), time = 26.896ms.
2021-04-22 17:01:02.011992: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:823]   TensorRTOptimizer: Graph size after: 57 nodes (0), 69 edges (0), time = 3.295ms.
2021-04-22 17:01:02.012017: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:823]   constant_folding: Graph size after: 57 nodes (0), 69 edges (0), time = 27.544ms.
2021-04-22 17:01:02.012038: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:821] Optimization results for grappler item: detector/yolo-v3/TRTEngineOp_2_native_segment
2021-04-22 17:01:02.012058: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:823]   constant_folding: Graph size after: 48 nodes (0), 58 edges (0), time = 10.13ms.
2021-04-22 17:01:02.012080: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:823]   layout: Graph size after: 48 nodes (0), 58 edges (0), time = 10.448ms.
2021-04-22 17:01:02.012100: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:823]   constant_folding: Graph size after: 48 nodes (0), 58 edges (0), time = 9.681ms.
2021-04-22 17:01:02.012119: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:823]   TensorRTOptimizer: Graph size after: 48 nodes (0), 58 edges (0), time = 1.092ms.
2021-04-22 17:01:02.012138: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:823]   constant_folding: Graph size after: 48 nodes (0), 58 edges (0), time = 9.298ms.

Is there any other way to convert my frozen graph with TF-TRT while avoiding the out-of-memory error? I followed the steps in the URL you provided, and this memory-allocation problem is blocking the conversion.

Thanks for your answer.

Hi,

It seems the memory usage is much larger than expected.
Do you use a standard YOLOv3 model or a custom one?

For a standard YOLOv3, below are some tutorials, and they don't require too much memory:

$ /opt/nvidia/deepstream/deepstream-5.1/sources/objectDetector_Yolo/

$ /usr/src/tensorrt/samples/python/yolov3_onnx/

Thanks.


@AastaLLL Thank you for your answer. I tried to install onnx but ran into some complications:

Defaulting to user installation because normal site-packages is not writeable
Collecting onnx
  Downloading onnx-1.9.0.tar.gz (9.8 MB)
     |████████████████████████████████| 9.8 MB 3.8 MB/s 
  Installing build dependencies ... done
  Getting requirements to build wheel ... done
  Installing backend dependencies ... done
    Preparing wheel metadata ... done
Requirement already satisfied: typing-extensions>=3.6.2.1 in /usr/local/lib/python3.6/dist-packages (from onnx) (3.7.4.3)
Requirement already satisfied: numpy>=1.16.6 in /usr/local/lib/python3.6/dist-packages (from onnx) (1.18.5)
Requirement already satisfied: six in /usr/local/lib/python3.6/dist-packages (from onnx) (1.15.0)
Requirement already satisfied: protobuf in /usr/local/lib/python3.6/dist-packages (from onnx) (3.15.7)
Building wheels for collected packages: onnx
  Building wheel for onnx (PEP 517) ... error
  ERROR: Command errored out with exit status 1:
   command: /usr/bin/python3 /usr/local/lib/python3.6/dist-packages/pip/_vendor/pep517/_in_process.py build_wheel /tmp/tmpb1sjim8r
       cwd: /tmp/pip-install-lx3tyq7a/onnx_3b031bbdad8d4fa79d4b1c11dfed8cb4
  Complete output (80 lines):
  fatal: not a git repository (or any of the parent directories): .git
  running bdist_wheel
  running build
  running build_py
  running create_version
  running cmake_build
  Using cmake args: ['/usr/bin/cmake', '-DPYTHON_INCLUDE_DIR=/usr/include/python3.6m', '-DPYTHON_EXECUTABLE=/usr/bin/python3', '-DBUILD_ONNX_PYTHON=ON', '-DCMAKE_EXPORT_COMPILE_COMMANDS=ON', '-DONNX_NAMESPACE=onnx', '-DPY_EXT_SUFFIX=.cpython-36m-aarch64-linux-gnu.so', '-DCMAKE_BUILD_TYPE=Release', '-DONNX_ML=1', '/tmp/pip-install-lx3tyq7a/onnx_3b031bbdad8d4fa79d4b1c11dfed8cb4']
  -- The C compiler identification is GNU 7.5.0
  -- The CXX compiler identification is GNU 7.5.0
  -- Check for working C compiler: /usr/bin/cc
  -- Check for working C compiler: /usr/bin/cc -- works
  -- Detecting C compiler ABI info
  -- Detecting C compiler ABI info - done
  -- Detecting C compile features
  -- Detecting C compile features - done
  -- Check for working CXX compiler: /usr/bin/c++
  -- Check for working CXX compiler: /usr/bin/c++ -- works
  -- Detecting CXX compiler ABI info
  -- Detecting CXX compiler ABI info - done
  -- Detecting CXX compile features
  -- Detecting CXX compile features - done
  -- Found PythonInterp: /usr/bin/python3 (found version "3.6.9")
  -- Found PythonLibs: /usr/lib/aarch64-linux-gnu/libpython3.6m.so (found version "3.6.9")
  Generated: /tmp/pip-install-lx3tyq7a/onnx_3b031bbdad8d4fa79d4b1c11dfed8cb4/.setuptools-cmake-build/onnx/onnx-ml.proto
  CMake Error at CMakeLists.txt:292 (message):
    Protobuf compiler not found
  Call Stack (most recent call first):
    CMakeLists.txt:323 (relative_protobuf_generate_cpp)
  
  
  -- Configuring incomplete, errors occurred!
  See also "/tmp/pip-install-lx3tyq7a/onnx_3b031bbdad8d4fa79d4b1c11dfed8cb4/.setuptools-cmake-build/CMakeFiles/CMakeOutput.log".
  Traceback (most recent call last):
    File "/usr/local/lib/python3.6/dist-packages/pip/_vendor/pep517/_in_process.py", line 280, in <module>
      main()
    File "/usr/local/lib/python3.6/dist-packages/pip/_vendor/pep517/_in_process.py", line 263, in main
      json_out['return_val'] = hook(**hook_input['kwargs'])
    File "/usr/local/lib/python3.6/dist-packages/pip/_vendor/pep517/_in_process.py", line 205, in build_wheel
      metadata_directory)
    File "/usr/local/lib/python3.6/dist-packages/setuptools/build_meta.py", line 230, in build_wheel
      wheel_directory, config_settings)
    File "/usr/local/lib/python3.6/dist-packages/setuptools/build_meta.py", line 215, in _build_with_temp_dir
      self.run_setup()
    File "/usr/local/lib/python3.6/dist-packages/setuptools/build_meta.py", line 267, in run_setup
      self).run_setup(setup_script=setup_script)
    File "/usr/local/lib/python3.6/dist-packages/setuptools/build_meta.py", line 158, in run_setup
      exec(compile(code, __file__, 'exec'), locals())
    File "setup.py", line 359, in <module>
      'backend-test-tools = onnx.backend.test.cmd_tools:main',
    File "/usr/local/lib/python3.6/dist-packages/setuptools/__init__.py", line 163, in setup
      return distutils.core.setup(**attrs)
    File "/usr/lib/python3.6/distutils/core.py", line 148, in setup
      dist.run_commands()
    File "/usr/lib/python3.6/distutils/dist.py", line 955, in run_commands
      self.run_command(cmd)
    File "/usr/lib/python3.6/distutils/dist.py", line 974, in run_command
      cmd_obj.run()
    File "/usr/local/lib/python3.6/dist-packages/wheel/bdist_wheel.py", line 299, in run
      self.run_command('build')
    File "/usr/lib/python3.6/distutils/cmd.py", line 313, in run_command
      self.distribution.run_command(command)
    File "/usr/lib/python3.6/distutils/dist.py", line 974, in run_command
      cmd_obj.run()
    File "/usr/lib/python3.6/distutils/command/build.py", line 135, in run
      self.run_command(cmd_name)
    File "/usr/lib/python3.6/distutils/cmd.py", line 313, in run_command
      self.distribution.run_command(command)
    File "/usr/lib/python3.6/distutils/dist.py", line 974, in run_command
      cmd_obj.run()
    File "setup.py", line 233, in run
      self.run_command('cmake_build')
    File "/usr/lib/python3.6/distutils/cmd.py", line 313, in run_command
      self.distribution.run_command(command)
    File "/usr/lib/python3.6/distutils/dist.py", line 974, in run_command
      cmd_obj.run()
    File "setup.py", line 219, in run
      subprocess.check_call(cmake_args)
    File "/usr/lib/python3.6/subprocess.py", line 311, in check_call
      raise CalledProcessError(retcode, cmd)
  subprocess.CalledProcessError: Command '['/usr/bin/cmake', '-DPYTHON_INCLUDE_DIR=/usr/include/python3.6m', '-DPYTHON_EXECUTABLE=/usr/bin/python3', '-DBUILD_ONNX_PYTHON=ON', '-DCMAKE_EXPORT_COMPILE_COMMANDS=ON', '-DONNX_NAMESPACE=onnx', '-DPY_EXT_SUFFIX=.cpython-36m-aarch64-linux-gnu.so', '-DCMAKE_BUILD_TYPE=Release', '-DONNX_ML=1', '/tmp/pip-install-lx3tyq7a/onnx_3b031bbdad8d4fa79d4b1c11dfed8cb4']' returned non-zero exit status 1.
  ----------------------------------------
  ERROR: Failed building wheel for onnx
Failed to build onnx
ERROR: Could not build wheels for onnx which use PEP 517 and cannot be installed directly

Can you please suggest a way to install onnx? I couldn't install it with pip.

Hi,

Please try the below command:

$ sudo apt-get install libprotobuf-dev protobuf-compiler python3-pip
$ pip3 install cython
$ pip3 install onnx

Thanks.


Thank you @AastaLLL

That worked for me!