Hello @dusty_nv ,
Thank you for your reply :)
I will try to rebuild the repo and let you know.
Harry
Hello @dusty_nv ,
I rebuilt the repo as you suggested.
I have initialised MLPERF_SCRATCH_PATH.
Now I am facing an issue when running python3 code/resnet50/tensorrt/preprocess_data.py.
Please have a look at my log below:
Traceback (most recent call last):
File "code/resnet50/tensorrt/preprocess_data.py", line 23, in <module>
from code.common.fix_sys_path import ScopedRestrictedImport
ModuleNotFoundError: No module named 'code.common'; 'code' is not a package
However, the code and common folders do exist, and the Python module fix_sys_path.py does contain the ScopedRestrictedImport class.
NOTE: I am running the script from closed/NVIDIA.
Best regards,
Harry
Hi Harry, could you try make preprocess_data? You might find other instructions in the README file helpful: https://github.com/mlcommons/inference_results_v2.1/tree/master/closed/NVIDIA#preprocessing-the-datasets-for-inference
In addition, we run Python scripts as modules, e.g. python3 -m code.resnet50.tensorrt.preprocess_data --data_dir=$(DATA_DIR) --preprocessed_data_dir=$(PREPROCESSED_DATA_DIR)
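For what it's worth, the difference between the two invocations comes down to what Python puts at the front of sys.path: running a file directly prepends the file's own directory, while -m prepends the current working directory, which is what lets code.common resolve. Here is a minimal sketch (using a hypothetical pkg package tree, not the MLPerf repo) that reproduces both behaviours:

```python
import pathlib
import subprocess
import sys
import tempfile

# Throwaway package tree: pkg/sub/script.py imports from pkg.common.
root = pathlib.Path(tempfile.mkdtemp())
(root / "pkg" / "common").mkdir(parents=True)
(root / "pkg" / "sub").mkdir()
for pkg_dir in ("pkg", "pkg/common", "pkg/sub"):
    (root / pkg_dir / "__init__.py").write_text("")
(root / "pkg" / "common" / "helper.py").write_text("VALUE = 42\n")
(root / "pkg" / "sub" / "script.py").write_text(
    "from pkg.common.helper import VALUE\nprint(VALUE)\n"
)

# Direct invocation: sys.path[0] is pkg/sub, so 'pkg' is not importable.
direct = subprocess.run(
    [sys.executable, str(root / "pkg" / "sub" / "script.py")],
    cwd=root, capture_output=True, text=True,
)
# Module invocation: sys.path[0] is the cwd, where 'pkg' lives.
as_module = subprocess.run(
    [sys.executable, "-m", "pkg.sub.script"],
    cwd=root, capture_output=True, text=True,
)
print(direct.returncode != 0, "ModuleNotFoundError" in direct.stderr)
print(as_module.stdout.strip())  # 42
```

Running scripts from closed/NVIDIA with -m keeps that directory first on the path, which is why the repo's imports are written against it.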
Hello Dustin,
Thank you for your help :)
I tried running make preprocess_data
and the Python module invocation as well, but I always get the same error. I think it comes from OpenCV; apparently OpenCV is not happy when the script is run as a module. Here is my error log below:
File "/usr/lib/python3.8/runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/usr/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/MLPerfv2.1/inference_results_v2.1/closed/NVIDIA/code/resnet50/tensorrt/preprocess_data.py", line 26, in <module>
import cv2
File "/usr/lib/python3.8/dist-packages/cv2/__init__.py", line 180, in <module>
bootstrap()
File "/usr/lib/python3.8/dist-packages/cv2/__init__.py", line 132, in bootstrap
sys.path.insert(1 if not applySysPathWorkaround else 0, p)
AttributeError: 'tuple' object has no attribute 'insert'
NOTE: The datasets are indeed in the right place in MLPERF_SCRATCH_PATH.
Thank you :)
Harry
I got the same issue
Hmm okay... are you normally able to import cv2? Does python3 -c 'import cv2' work for you?
When I type python3 -c 'import cv2' in my terminal I get nothing (no errors, etc.), so I think it is working.
I can also import cv2 from the Python interpreter without any error.
Hi Harry, can you try removing the ScopedRestrictedImport() from here: https://github.com/mlcommons/inference_results_v2.1/blob/master/closed/NVIDIA/code/resnet50/tensorrt/preprocess_data.py#L24
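My guess at the mechanism (an assumption based on the traceback, not on reading the MLPerf helper): cv2's bootstrap mutates sys.path in place on import, so if a guard like ScopedRestrictedImport has temporarily swapped sys.path for an immutable sequence, the insert() call in cv2/__init__.py fails exactly as in the log above. A tiny sketch of that failure mode:

```python
import sys

saved = sys.path
try:
    # Simulate a restricted-import guard replacing sys.path with a tuple.
    sys.path = tuple(saved)
    try:
        # This is the kind of call cv2's bootstrap() makes on import.
        sys.path.insert(0, "/tmp")
    except AttributeError as exc:
        msg = str(exc)
        print(msg)  # 'tuple' object has no attribute 'insert'
finally:
    sys.path = saved  # always restore the real search path
```

Removing the guard leaves sys.path as a plain mutable list, which is why the import then succeeds.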
Hello Dustin,
I got past the OpenCV error by removing ScopedRestrictedImport() as you suggested.
However, I am now facing a new issue; I think it comes from the plugins. Please have a look at my error log below:
make[1]: Entering directory '/MLPerfv2.1/inference_results_v2.1/closed/NVIDIA'
[2022-11-07 10:08:15,252 main_v2.py:221 INFO] Detected system ID: KnownSystem.Orin
[2022-11-07 10:08:19,469 generate_engines.py:172 INFO] Building engines for resnet50 benchmark in Offline scenario...
[2022-11-07 10:08:19,548 ResNet50.py:36 INFO] Using workspace size: 1073741824
[11/07/2022-10:08:19] [TRT] [I] [MemUsageChange] Init CUDA: CPU +213, GPU +0, now: CPU 257, GPU 5728 (MiB)
[11/07/2022-10:08:22] [TRT] [I] [MemUsageChange] Init builder kernel library: CPU +351, GPU +333, now: CPU 627, GPU 6079 (MiB)
[2022-11-07 10:08:22,876 builder.py:107 INFO] Using DLA: Core 0
[2022-11-07 10:08:25,126 rn50_graphsurgeon.py:474 INFO] Renaming layers
[2022-11-07 10:08:25,127 rn50_graphsurgeon.py:485 INFO] Renaming tensors
[2022-11-07 10:08:25,127 rn50_graphsurgeon.py:834 INFO] Adding Squeeze
[2022-11-07 10:08:25,127 rn50_graphsurgeon.py:869 INFO] Adding Conv layer, instead of FC
[2022-11-07 10:08:25,130 rn50_graphsurgeon.py:890 INFO] Adding TopK layer
[2022-11-07 10:08:25,131 rn50_graphsurgeon.py:907 INFO] Removing obsolete layers
[2022-11-07 10:08:25,212 ResNet50.py:94 INFO] Unmarking output: topk_layer_output_value
[11/07/2022-10:08:25] [TRT] [W] DynamicRange(min: -128, max: 127). Dynamic range should be symmetric for better accuracy.
[2022-11-07 10:08:25,214 builder.py:177 INFO] Building ./build/engines/Orin/resnet50/Offline/resnet50-Offline-dla-b16-int8.lwis_k_99_MaxP.plan
[11/07/2022-10:08:25] [TRT] [W] Layer 'topk_layer': Unsupported on DLA. Switching this layer's device type to GPU.
[11/07/2022-10:08:25] [TRT] [I] Reading Calibration Cache for calibrator: EntropyCalibration2
[11/07/2022-10:08:25] [TRT] [I] Generated calibration scales using calibration cache. Make sure that calibration cache has latest scales.
[11/07/2022-10:08:25] [TRT] [I] To regenerate calibration cache, please delete the existing one. TensorRT will generate a new calibration cache.
[11/07/2022-10:08:29] [TRT] [I] ---------- Layers Running on DLA ----------
[11/07/2022-10:08:29] [TRT] [I] [DlaLayer] {ForeignNode[conv1...fc_replaced]}
[11/07/2022-10:08:29] [TRT] [I] ---------- Layers Running on GPU ----------
[11/07/2022-10:08:29] [TRT] [I] [GpuLayer] TOPK: topk_layer
[11/07/2022-10:08:31] [TRT] [I] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +535, GPU +302, now: CPU 1655, GPU 7065 (MiB)
[11/07/2022-10:08:31] [TRT] [I] [MemUsageChange] Init cuDNN: CPU +85, GPU +83, now: CPU 1740, GPU 7148 (MiB)
[11/07/2022-10:08:31] [TRT] [I] Local timing cache in use. Profiling results in this builder pass will not be stored.
[11/07/2022-10:08:36] [TRT] [I] Detected 1 inputs and 1 output network tensors.
[11/07/2022-10:08:37] [TRT] [I] Total Host Persistent Memory: 848
[11/07/2022-10:08:37] [TRT] [I] Total Device Persistent Memory: 0
[11/07/2022-10:08:37] [TRT] [I] Total Scratch Memory: 1024
[11/07/2022-10:08:37] [TRT] [I] [MemUsageStats] Peak memory usage of TRT CPU/GPU memory allocators: CPU 26 MiB, GPU 31 MiB
[11/07/2022-10:08:37] [TRT] [I] [BlockAssignment] Algorithm ShiftNTopDown took 0.012416ms to assign 3 blocks to 3 nodes requiring 65536 bytes.
[11/07/2022-10:08:37] [TRT] [I] Total Activation Memory: 65536
[11/07/2022-10:08:37] [TRT] [I] [MemUsageChange] TensorRT-managed allocation in building engine: CPU +26, GPU +4, now: CPU 26, GPU 4 (MiB)
[11/07/2022-10:08:37] [TRT] [I] The profiling verbosity was set to ProfilingVerbosity::kLAYER_NAMES_ONLY when the engine was built, so only the layer names will be returned. Rebuild the engine with ProfilingVerbosity::kDETAILED to get more verbose layer information.
[2022-11-07 10:08:37,338 builder.py:210 INFO] ========= TensorRT Engine Layer Information =========
[2022-11-07 10:08:37,339 builder.py:211 INFO] Layers:
{ForeignNode[conv1...fc_replaced]}
Reformatting CopyNode for Input Tensor 0 to topk_layer
input_tensor_0 finish
fc_replaced_out_0 finish
topk_layer
Bindings:
input_tensor_0
topk_layer_output_index
[11/07/2022-10:08:37] [TRT] [W] The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
[11/07/2022-10:08:37] [TRT] [W] The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
[2022-11-07 10:08:37,652 ResNet50.py:36 INFO] Using workspace size: 1073741824
[11/07/2022-10:08:37] [TRT] [I] [MemUsageChange] Init CUDA: CPU +0, GPU +0, now: CPU 1453, GPU 7110 (MiB)
[2022-11-07 10:08:38,283 rn50_graphsurgeon.py:474 INFO] Renaming layers
[2022-11-07 10:08:38,283 rn50_graphsurgeon.py:485 INFO] Renaming tensors
[2022-11-07 10:08:38,284 rn50_graphsurgeon.py:834 INFO] Adding Squeeze
[2022-11-07 10:08:38,284 rn50_graphsurgeon.py:869 INFO] Adding Conv layer, instead of FC
[2022-11-07 10:08:38,286 rn50_graphsurgeon.py:890 INFO] Adding TopK layer
[2022-11-07 10:08:38,287 rn50_graphsurgeon.py:907 INFO] Removing obsolete layers
[2022-11-07 10:08:38,289 rn50_graphsurgeon.py:580 INFO] Fusing ops in res2_mega
[2022-11-07 10:08:38,292 rn50_graphsurgeon.py:693 INFO] Plugin RES2_FULL_FUSION successful
[2022-11-07 10:08:38,292 rn50_graphsurgeon.py:499 INFO] Replacing all branch2c beta=1 conv with smallk kernel.
[2022-11-07 10:08:38,292 rn50_graphsurgeon.py:566 INFO] Fusing SmallTileGEMM_TRT_res3a_branch2c_conv_residual_relu with smallk...
[2022-11-07 10:08:38,293 rn50_graphsurgeon.py:566 INFO] Fusing SmallTileGEMM_TRT_res3b_branch2c_conv_residual_relu with smallk...
[2022-11-07 10:08:38,293 rn50_graphsurgeon.py:566 INFO] Fusing SmallTileGEMM_TRT_res3c_branch2c_conv_residual_relu with smallk...
[2022-11-07 10:08:38,293 rn50_graphsurgeon.py:566 INFO] Fusing SmallTileGEMM_TRT_res3d_branch2c_conv_residual_relu with smallk...
[2022-11-07 10:08:38,293 rn50_graphsurgeon.py:566 INFO] Fusing SmallTileGEMM_TRT_res4a_branch2c_conv_residual_relu with smallk...
[2022-11-07 10:08:38,293 rn50_graphsurgeon.py:566 INFO] Fusing SmallTileGEMM_TRT_res4b_branch2c_conv_residual_relu with smallk...
[2022-11-07 10:08:38,294 rn50_graphsurgeon.py:566 INFO] Fusing SmallTileGEMM_TRT_res4c_branch2c_conv_residual_relu with smallk...
[2022-11-07 10:08:38,294 rn50_graphsurgeon.py:566 INFO] Fusing SmallTileGEMM_TRT_res4d_branch2c_conv_residual_relu with smallk...
[2022-11-07 10:08:38,294 rn50_graphsurgeon.py:566 INFO] Fusing SmallTileGEMM_TRT_res4e_branch2c_conv_residual_relu with smallk...
[2022-11-07 10:08:38,294 rn50_graphsurgeon.py:566 INFO] Fusing SmallTileGEMM_TRT_res4f_branch2c_conv_residual_relu with smallk...
[2022-11-07 10:08:38,294 rn50_graphsurgeon.py:566 INFO] Fusing SmallTileGEMM_TRT_res5a_branch2c_conv_residual_relu with smallk...
[2022-11-07 10:08:38,294 rn50_graphsurgeon.py:566 INFO] Fusing SmallTileGEMM_TRT_res5b_branch2c_conv_residual_relu with smallk...
[2022-11-07 10:08:38,295 rn50_graphsurgeon.py:566 INFO] Fusing SmallTileGEMM_TRT_res5c_branch2c_conv_residual_relu with smallk...
[2022-11-07 10:08:38,297 rn50_graphsurgeon.py:573 INFO] Plugin SmallTileGEMM_TRT fused successful for res3/4/5 branch2c
[11/07/2022-10:08:38] [TRT] [I] No importer registered for op: RnRes2FullFusion_TRT. Attempting to import as plugin.
[11/07/2022-10:08:38] [TRT] [I] Searching for plugin: RnRes2FullFusion_TRT, plugin_version: 1, plugin_namespace:
[11/07/2022-10:08:38] [TRT] [I] Successfully created plugin: RnRes2FullFusion_TRT
[11/07/2022-10:08:38] [TRT] [I] No importer registered for op: SmallTileGEMM_TRT. Attempting to import as plugin.
[11/07/2022-10:08:38] [TRT] [I] Searching for plugin: SmallTileGEMM_TRT, plugin_version: 1, plugin_namespace:
[11/07/2022-10:08:38] [TRT] [W] builtin_op_importers.cpp:4714: Attribute epilogueScaleBias not found in plugin node! Ensure that the plugin creator has a default value defined or the engine may fail to build.
[11/07/2022-10:08:38] [TRT] [W] builtin_op_importers.cpp:4714: Attribute epilogueScaleBiasRelu not found in plugin node! Ensure that the plugin creator has a default value defined or the engine may fail to build.
[11/07/2022-10:08:38] [TRT] [W] builtin_op_importers.cpp:4714: Attribute epilogueScaleBiasGelu not found in plugin node! Ensure that the plugin creator has a default value defined or the engine may fail to build.
[11/07/2022-10:08:38] [TRT] [W] builtin_op_importers.cpp:4714: Attribute epilogueScaleBiasBeta not found in plugin node! Ensure that the plugin creator has a default value defined or the engine may fail to build.
[11/07/2022-10:08:38] [TRT] [F] Validation failed: false
plugin/smallTileGEMMPlugin/smallTileGEMMPlugin.cpp:520
[11/07/2022-10:08:38] [TRT] [E] std::exception
Process Process-1:
Traceback (most recent call last):
File "/usr/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap
self.run()
File "/usr/lib/python3.8/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/MLPerfv2.1/inference_results_v2.1/closed/NVIDIA/code/actionhandler/base.py", line 185, in subprocess_target
return self.action_handler.handle()
File "/MLPerfv2.1/inference_results_v2.1/closed/NVIDIA/code/actionhandler/generate_engines.py", line 175, in handle
total_engine_build_time += self.build_engine(job)
File "/MLPerfv2.1/inference_results_v2.1/closed/NVIDIA/code/actionhandler/generate_engines.py", line 166, in build_engine
builder.build_engines()
File "/MLPerfv2.1/inference_results_v2.1/closed/NVIDIA/code/common/builder.py", line 170, in build_engines
self.initialize()
File "/MLPerfv2.1/inference_results_v2.1/closed/NVIDIA/code/resnet50/tensorrt/ResNet50.py", line 87, in initialize
raise RuntimeError(f"ResNet50 onnx model processing failed! Error: {err_desc}")
RuntimeError: ResNet50 onnx model processing failed! Error: Assertion failed: plugin && "Could not create plugin"
[2022-11-07 10:08:39,914 generate_engines.py:172 INFO] Building engines for resnet50 benchmark in Offline scenario...
[2022-11-07 10:08:39,972 ResNet50.py:36 INFO] Using workspace size: 1073741824
[11/07/2022-10:08:40] [TRT] [I] [MemUsageChange] Init CUDA: CPU +213, GPU +0, now: CPU 257, GPU 5821 (MiB)
[11/07/2022-10:08:42] [TRT] [I] [MemUsageChange] Init builder kernel library: CPU +351, GPU +337, now: CPU 627, GPU 6180 (MiB)
[2022-11-07 10:08:42,798 builder.py:107 INFO] Using DLA: Core 0
[2022-11-07 10:08:43,389 rn50_graphsurgeon.py:474 INFO] Renaming layers
[2022-11-07 10:08:43,390 rn50_graphsurgeon.py:485 INFO] Renaming tensors
[2022-11-07 10:08:43,390 rn50_graphsurgeon.py:834 INFO] Adding Squeeze
[2022-11-07 10:08:43,390 rn50_graphsurgeon.py:869 INFO] Adding Conv layer, instead of FC
[2022-11-07 10:08:43,393 rn50_graphsurgeon.py:890 INFO] Adding TopK layer
[2022-11-07 10:08:43,393 rn50_graphsurgeon.py:907 INFO] Removing obsolete layers
[2022-11-07 10:08:43,472 ResNet50.py:94 INFO] Unmarking output: topk_layer_output_value
[11/07/2022-10:08:43] [TRT] [W] DynamicRange(min: -128, max: 127). Dynamic range should be symmetric for better accuracy.
[2022-11-07 10:08:43,473 builder.py:177 INFO] Building ./build/engines/Orin/resnet50/Offline/resnet50-Offline-dla-b16-int8.lwis_k_99_MaxP.plan
[11/07/2022-10:08:43] [TRT] [W] Layer 'topk_layer': Unsupported on DLA. Switching this layer's device type to GPU.
[11/07/2022-10:08:43] [TRT] [I] Reading Calibration Cache for calibrator: EntropyCalibration2
[11/07/2022-10:08:43] [TRT] [I] Generated calibration scales using calibration cache. Make sure that calibration cache has latest scales.
[11/07/2022-10:08:43] [TRT] [I] To regenerate calibration cache, please delete the existing one. TensorRT will generate a new calibration cache.
[11/07/2022-10:08:48] [TRT] [I] ---------- Layers Running on DLA ----------
[11/07/2022-10:08:48] [TRT] [I] [DlaLayer] {ForeignNode[conv1...fc_replaced]}
[11/07/2022-10:08:48] [TRT] [I] ---------- Layers Running on GPU ----------
[11/07/2022-10:08:48] [TRT] [I] [GpuLayer] TOPK: topk_layer
[11/07/2022-10:08:49] [TRT] [I] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +535, GPU +259, now: CPU 1655, GPU 7185 (MiB)
[11/07/2022-10:08:50] [TRT] [I] [MemUsageChange] Init cuDNN: CPU +85, GPU +88, now: CPU 1740, GPU 7273 (MiB)
[11/07/2022-10:08:50] [TRT] [I] Local timing cache in use. Profiling results in this builder pass will not be stored.
[11/07/2022-10:08:57] [TRT] [I] Detected 1 inputs and 1 output network tensors.
[11/07/2022-10:08:57] [TRT] [I] Total Host Persistent Memory: 848
[11/07/2022-10:08:57] [TRT] [I] Total Device Persistent Memory: 0
[11/07/2022-10:08:57] [TRT] [I] Total Scratch Memory: 1024
[11/07/2022-10:08:57] [TRT] [I] [MemUsageStats] Peak memory usage of TRT CPU/GPU memory allocators: CPU 26 MiB, GPU 31 MiB
[11/07/2022-10:08:57] [TRT] [I] [BlockAssignment] Algorithm ShiftNTopDown took 0.010496ms to assign 3 blocks to 3 nodes requiring 65536 bytes.
[11/07/2022-10:08:57] [TRT] [I] Total Activation Memory: 65536
[11/07/2022-10:08:57] [TRT] [I] [MemUsageChange] TensorRT-managed allocation in building engine: CPU +26, GPU +4, now: CPU 26, GPU 4 (MiB)
[11/07/2022-10:08:57] [TRT] [I] The profiling verbosity was set to ProfilingVerbosity::kLAYER_NAMES_ONLY when the engine was built, so only the layer names will be returned. Rebuild the engine with ProfilingVerbosity::kDETAILED to get more verbose layer information.
[2022-11-07 10:08:57,709 builder.py:210 INFO] ========= TensorRT Engine Layer Information =========
[2022-11-07 10:08:57,709 builder.py:211 INFO] Layers:
{ForeignNode[conv1...fc_replaced]}
Reformatting CopyNode for Input Tensor 0 to topk_layer
input_tensor_0 finish
fc_replaced_out_0 finish
topk_layer
Bindings:
input_tensor_0
topk_layer_output_index
[11/07/2022-10:08:57] [TRT] [W] The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
[11/07/2022-10:08:57] [TRT] [W] The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
[2022-11-07 10:08:59,325 ResNet50.py:36 INFO] Using workspace size: 1073741824
[11/07/2022-10:08:59] [TRT] [I] [MemUsageChange] Init CUDA: CPU +0, GPU +0, now: CPU 1453, GPU 7323 (MiB)
[2022-11-07 10:09:03,892 rn50_graphsurgeon.py:474 INFO] Renaming layers
[2022-11-07 10:09:03,893 rn50_graphsurgeon.py:485 INFO] Renaming tensors
[2022-11-07 10:09:03,893 rn50_graphsurgeon.py:834 INFO] Adding Squeeze
[2022-11-07 10:09:03,893 rn50_graphsurgeon.py:869 INFO] Adding Conv layer, instead of FC
[2022-11-07 10:09:03,896 rn50_graphsurgeon.py:890 INFO] Adding TopK layer
[2022-11-07 10:09:03,896 rn50_graphsurgeon.py:907 INFO] Removing obsolete layers
[2022-11-07 10:09:03,899 rn50_graphsurgeon.py:580 INFO] Fusing ops in res2_mega
[2022-11-07 10:09:03,901 rn50_graphsurgeon.py:693 INFO] Plugin RES2_FULL_FUSION successful
[2022-11-07 10:09:03,901 rn50_graphsurgeon.py:499 INFO] Replacing all branch2c beta=1 conv with smallk kernel.
[2022-11-07 10:09:03,902 rn50_graphsurgeon.py:566 INFO] Fusing SmallTileGEMM_TRT_res3a_branch2c_conv_residual_relu with smallk...
[2022-11-07 10:09:03,902 rn50_graphsurgeon.py:566 INFO] Fusing SmallTileGEMM_TRT_res3b_branch2c_conv_residual_relu with smallk...
[2022-11-07 10:09:03,902 rn50_graphsurgeon.py:566 INFO] Fusing SmallTileGEMM_TRT_res3c_branch2c_conv_residual_relu with smallk...
[2022-11-07 10:09:03,902 rn50_graphsurgeon.py:566 INFO] Fusing SmallTileGEMM_TRT_res3d_branch2c_conv_residual_relu with smallk...
[2022-11-07 10:09:03,902 rn50_graphsurgeon.py:566 INFO] Fusing SmallTileGEMM_TRT_res4a_branch2c_conv_residual_relu with smallk...
[2022-11-07 10:09:03,903 rn50_graphsurgeon.py:566 INFO] Fusing SmallTileGEMM_TRT_res4b_branch2c_conv_residual_relu with smallk...
[2022-11-07 10:09:03,903 rn50_graphsurgeon.py:566 INFO] Fusing SmallTileGEMM_TRT_res4c_branch2c_conv_residual_relu with smallk...
[2022-11-07 10:09:03,903 rn50_graphsurgeon.py:566 INFO] Fusing SmallTileGEMM_TRT_res4d_branch2c_conv_residual_relu with smallk...
[2022-11-07 10:09:03,903 rn50_graphsurgeon.py:566 INFO] Fusing SmallTileGEMM_TRT_res4e_branch2c_conv_residual_relu with smallk...
[2022-11-07 10:09:03,903 rn50_graphsurgeon.py:566 INFO] Fusing SmallTileGEMM_TRT_res4f_branch2c_conv_residual_relu with smallk...
[2022-11-07 10:09:03,904 rn50_graphsurgeon.py:566 INFO] Fusing SmallTileGEMM_TRT_res5a_branch2c_conv_residual_relu with smallk...
[2022-11-07 10:09:03,904 rn50_graphsurgeon.py:566 INFO] Fusing SmallTileGEMM_TRT_res5b_branch2c_conv_residual_relu with smallk...
[2022-11-07 10:09:03,904 rn50_graphsurgeon.py:566 INFO] Fusing SmallTileGEMM_TRT_res5c_branch2c_conv_residual_relu with smallk...
[2022-11-07 10:09:03,906 rn50_graphsurgeon.py:573 INFO] Plugin SmallTileGEMM_TRT fused successful for res3/4/5 branch2c
[11/07/2022-10:09:03] [TRT] [I] No importer registered for op: RnRes2FullFusion_TRT. Attempting to import as plugin.
[11/07/2022-10:09:03] [TRT] [I] Searching for plugin: RnRes2FullFusion_TRT, plugin_version: 1, plugin_namespace:
[11/07/2022-10:09:03] [TRT] [I] Successfully created plugin: RnRes2FullFusion_TRT
[11/07/2022-10:09:03] [TRT] [I] No importer registered for op: SmallTileGEMM_TRT. Attempting to import as plugin.
[11/07/2022-10:09:03] [TRT] [I] Searching for plugin: SmallTileGEMM_TRT, plugin_version: 1, plugin_namespace:
[11/07/2022-10:09:03] [TRT] [W] builtin_op_importers.cpp:4714: Attribute epilogueScaleBias not found in plugin node! Ensure that the plugin creator has a default value defined or the engine may fail to build.
[11/07/2022-10:09:03] [TRT] [W] builtin_op_importers.cpp:4714: Attribute epilogueScaleBiasRelu not found in plugin node! Ensure that the plugin creator has a default value defined or the engine may fail to build.
[11/07/2022-10:09:03] [TRT] [W] builtin_op_importers.cpp:4714: Attribute epilogueScaleBiasGelu not found in plugin node! Ensure that the plugin creator has a default value defined or the engine may fail to build.
[11/07/2022-10:09:03] [TRT] [W] builtin_op_importers.cpp:4714: Attribute epilogueScaleBiasBeta not found in plugin node! Ensure that the plugin creator has a default value defined or the engine may fail to build.
[11/07/2022-10:09:03] [TRT] [F] Validation failed: false
plugin/smallTileGEMMPlugin/smallTileGEMMPlugin.cpp:520
[11/07/2022-10:09:03] [TRT] [E] std::exception
Process Process-2:
Traceback (most recent call last):
File "/usr/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap
self.run()
File "/usr/lib/python3.8/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/MLPerfv2.1/inference_results_v2.1/closed/NVIDIA/code/actionhandler/base.py", line 185, in subprocess_target
return self.action_handler.handle()
File "/MLPerfv2.1/inference_results_v2.1/closed/NVIDIA/code/actionhandler/generate_engines.py", line 175, in handle
total_engine_build_time += self.build_engine(job)
File "/MLPerfv2.1/inference_results_v2.1/closed/NVIDIA/code/actionhandler/generate_engines.py", line 166, in build_engine
builder.build_engines()
File "/MLPerfv2.1/inference_results_v2.1/closed/NVIDIA/code/common/builder.py", line 170, in build_engines
self.initialize()
File "/MLPerfv2.1/inference_results_v2.1/closed/NVIDIA/code/resnet50/tensorrt/ResNet50.py", line 87, in initialize
raise RuntimeError(f"ResNet50 onnx model processing failed! Error: {err_desc}")
RuntimeError: ResNet50 onnx model processing failed! Error: Assertion failed: plugin && "Could not create plugin"
Traceback (most recent call last):
File "/usr/lib/python3.8/runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/usr/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/MLPerfv2.1/inference_results_v2.1/closed/NVIDIA/code/main_v2.py", line 223, in <module>
main(main_args, DETECTED_SYSTEM)
File "/MLPerfv2.1/inference_results_v2.1/closed/NVIDIA/code/main_v2.py", line 147, in main
dispatch_action(main_args, config_dict, workload_setting)
File "/MLPerfv2.1/inference_results_v2.1/closed/NVIDIA/code/main_v2.py", line 194, in dispatch_action
handler.run()
File "/MLPerfv2.1/inference_results_v2.1/closed/NVIDIA/code/actionhandler/base.py", line 79, in run
self.handle_failure()
File "/MLPerfv2.1/inference_results_v2.1/closed/NVIDIA/code/actionhandler/base.py", line 182, in handle_failure
self.action_handler.handle_failure()
File "/MLPerfv2.1/inference_results_v2.1/closed/NVIDIA/code/actionhandler/generate_engines.py", line 183, in handle_failure
raise RuntimeError("Building engines failed!")
RuntimeError: Building engines failed!
make[1]: *** [Makefile:694: generate_engines] Error 1
make[1]: Leaving directory '/MLPerfv2.1/inference_results_v2.1/closed/NVIDIA'
make: *** [Makefile:688: run] Error 2
The command was:
$ make run RUN_ARGS="--benchmarks=resnet50 --scenarios=Offline --test_mode=AccuracyOnly"
I tried to run this one as well; the issue is pretty much the same as the first one:
$ make run RUN_ARGS="--benchmarks=resnet50 --scenarios=Offline --test_mode=PerformanceOnly"
Thank you in advance.
Best regards
Harry
The smallTileGEMMPlugin was added for Jetson Orin in TensorRT 8.5. Please make sure you are using the same software as listed here (inference_results_v2.1/README_Jetson.md at master · mlcommons/inference_results_v2.1 · GitHub).
Hello Dustin,
Thank you for your reply. I have the latest JetPack on my Jetson AGX Orin, which is JetPack 5.0.2.
I don't think I can upgrade the TensorRT or cuDNN versions since they come with JetPack. I already tried to upgrade the TensorRT version (see here), but I ended up breaking the whole JetPack install and had to re-flash the Jetson.
Is this MLPerf benchmark compatible with JetPack 5.0.2?
Thank you :)
Harry
Hi Harry, it’s compatible with the 22.08 CUDA-X AI Developer Preview, which has TensorRT 8.5 for Orin: https://developer.nvidia.com/embedded/22.08-jetson-cuda-x-ai-developer-preview
Here are the instructions for installing it: https://developer.nvidia.com/sites/default/files/akamai/embedded/22.08_Jetson_CUDA-X_AI_DP-Install_Guide.pdf
Have you reproduced the MLPerf v2.1 results successfully?
Hello @113736752 ,
I have not. It takes a lot of time to flash with CUDA-X and then re-flash with JetPack 5.0.2 after the benchmark, and I need a third-party Ubuntu machine to flash the Jetson AGX Orin with the SDK Manager.
But if you have succeeded in reproducing the v2.1 results, please let me know; I am interested.
Thank you
Hello @dusty_nv,
I installed the 22.08 CUDA-X AI Developer Preview following the instructions and removed ScopedRestrictedImport() as you suggested. Now I can run make preprocess_data BENCHMARKS=resnet50 without error, but I can't run make generate_engines or the performance test. You can find the error log below:
zack@zack-desktop:~/inference_results_v2.1/closed/NVIDIA$ make generate_engines RUN_ARGS="--benchmarks=resnet50 --scenarios=Offline"
[2022-11-14 21:04:50,728 main_v2.py:221 INFO] Detected system ID: KnownSystem.Orin
[2022-11-14 21:04:51,738 generate_engines.py:172 INFO] Building engines for resnet50 benchmark in Offline scenario...
[2022-11-14 21:04:51,832 ResNet50.py:36 INFO] Using workspace size: 1073741824
[11/14/2022-21:04:52] [TRT] [I] [MemUsageChange] Init CUDA: CPU +215, GPU +0, now: CPU 268, GPU 9145 (MiB)
[11/14/2022-21:04:54] [TRT] [I] [MemUsageChange] Init builder kernel library: CPU +346, GPU +497, now: CPU 636, GPU 9697 (MiB)
[2022-11-14 21:04:54,366 builder.py:107 INFO] Using DLA: Core 0
[2022-11-14 21:04:54,776 rn50_graphsurgeon.py:474 INFO] Renaming layers
[2022-11-14 21:04:54,776 rn50_graphsurgeon.py:485 INFO] Renaming tensors
[2022-11-14 21:04:54,777 rn50_graphsurgeon.py:834 INFO] Adding Squeeze
[2022-11-14 21:04:54,777 rn50_graphsurgeon.py:869 INFO] Adding Conv layer, instead of FC
[2022-11-14 21:04:54,779 rn50_graphsurgeon.py:890 INFO] Adding TopK layer
[2022-11-14 21:04:54,779 rn50_graphsurgeon.py:907 INFO] Removing obsolete layers
[2022-11-14 21:04:54,851 ResNet50.py:94 INFO] Unmarking output: topk_layer_output_value
[11/14/2022-21:04:54] [TRT] [W] DynamicRange(min: -128, max: 127). Dynamic range should be symmetric for better accuracy.
[2022-11-14 21:04:54,859 builder.py:177 INFO] Building ./build/engines/Orin/resnet50/Offline/resnet50-Offline-dla-b16-int8.lwis_k_99_MaxP.plan
[11/14/2022-21:04:54] [TRT] [W] Layer 'topk_layer': Unsupported on DLA. Switching this layer's device type to GPU.
[11/14/2022-21:04:54] [TRT] [I] Reading Calibration Cache for calibrator: EntropyCalibration2
[11/14/2022-21:04:54] [TRT] [I] Generated calibration scales using calibration cache. Make sure that calibration cache has latest scales.
[11/14/2022-21:04:54] [TRT] [I] To regenerate calibration cache, please delete the existing one. TensorRT will generate a new calibration cache.
[11/14/2022-21:04:58] [TRT] [I] ---------- Layers Running on DLA ----------
[11/14/2022-21:04:58] [TRT] [I] [DlaLayer] {ForeignNode[conv1 + scale_conv1...fc_replaced]}
[11/14/2022-21:04:58] [TRT] [I] ---------- Layers Running on GPU ----------
[11/14/2022-21:04:58] [TRT] [I] [GpuLayer] TOPK: topk_layer
[11/14/2022-21:05:02] [TRT] [I] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +534, GPU +921, now: CPU 1662, GPU 11685 (MiB)
[11/14/2022-21:05:02] [TRT] [I] [MemUsageChange] Init cuDNN: CPU +86, GPU +180, now: CPU 1748, GPU 11865 (MiB)
[11/14/2022-21:05:02] [TRT] [I] Local timing cache in use. Profiling results in this builder pass will not be stored.
[11/14/2022-21:05:07] [TRT] [I] Detected 1 inputs and 1 output network tensors.
[11/14/2022-21:05:08] [TRT] [I] Total Host Persistent Memory: 48
[11/14/2022-21:05:08] [TRT] [I] Total Device Persistent Memory: 0
[11/14/2022-21:05:08] [TRT] [I] Total Scratch Memory: 1024
[11/14/2022-21:05:08] [TRT] [I] [MemUsageStats] Peak memory usage of TRT CPU/GPU memory allocators: CPU 26 MiB, GPU 31 MiB
[11/14/2022-21:05:08] [TRT] [I] [BlockAssignment] Algorithm ShiftNTopDown took 0.013248ms to assign 3 blocks to 4 nodes requiring 80896 bytes.
[11/14/2022-21:05:08] [TRT] [I] Total Activation Memory: 80896
[11/14/2022-21:05:08] [TRT] [I] [MemUsageChange] TensorRT-managed allocation in building engine: CPU +25, GPU +4, now: CPU 25, GPU 4 (MiB)
[11/14/2022-21:05:08] [TRT] [I] The profiling verbosity was set to ProfilingVerbosity::kLAYER_NAMES_ONLY when the engine was built, so only the layer names will be returned. Rebuild the engine with ProfilingVerbosity::kDETAILED to get more verbose layer information.
[2022-11-14 21:05:08,346 builder.py:210 INFO] ========= TensorRT Engine Layer Information =========
[2022-11-14 21:05:08,346 builder.py:211 INFO] Layers:
{ForeignNode[conv1 + scale_conv1...fc_replaced]}
Reformatting CopyNode for Input Tensor 0 to topk_layer
topk_layer
Bindings:
input_tensor_0
topk_layer_output_index
[11/14/2022-21:05:08] [TRT] [W] The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
[11/14/2022-21:05:08] [TRT] [W] The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
[2022-11-14 21:05:08,371 ResNet50.py:36 INFO] Using workspace size: 1073741824
[11/14/2022-21:05:08] [TRT] [I] [MemUsageChange] Init CUDA: CPU +0, GPU +0, now: CPU 1461, GPU 11904 (MiB)
[2022-11-14 21:05:08,616 rn50_graphsurgeon.py:474 INFO] Renaming layers
[2022-11-14 21:05:08,616 rn50_graphsurgeon.py:485 INFO] Renaming tensors
[2022-11-14 21:05:08,616 rn50_graphsurgeon.py:834 INFO] Adding Squeeze
[2022-11-14 21:05:08,616 rn50_graphsurgeon.py:869 INFO] Adding Conv layer, instead of FC
[2022-11-14 21:05:08,618 rn50_graphsurgeon.py:890 INFO] Adding TopK layer
[2022-11-14 21:05:08,618 rn50_graphsurgeon.py:907 INFO] Removing obsolete layers
[2022-11-14 21:05:08,621 rn50_graphsurgeon.py:580 INFO] Fusing ops in res2_mega
[2022-11-14 21:05:08,623 rn50_graphsurgeon.py:693 INFO] Plugin RES2_FULL_FUSION successful
[2022-11-14 21:05:08,623 rn50_graphsurgeon.py:499 INFO] Replacing all branch2c beta=1 conv with smallk kernel.
[2022-11-14 21:05:08,623 rn50_graphsurgeon.py:566 INFO] Fusing SmallTileGEMM_TRT_res3a_branch2c_conv_residual_relu with smallk...
[2022-11-14 21:05:08,624 rn50_graphsurgeon.py:566 INFO] Fusing SmallTileGEMM_TRT_res3b_branch2c_conv_residual_relu with smallk...
[2022-11-14 21:05:08,624 rn50_graphsurgeon.py:566 INFO] Fusing SmallTileGEMM_TRT_res3c_branch2c_conv_residual_relu with smallk...
[2022-11-14 21:05:08,624 rn50_graphsurgeon.py:566 INFO] Fusing SmallTileGEMM_TRT_res3d_branch2c_conv_residual_relu with smallk...
[2022-11-14 21:05:08,624 rn50_graphsurgeon.py:566 INFO] Fusing SmallTileGEMM_TRT_res4a_branch2c_conv_residual_relu with smallk...
[2022-11-14 21:05:08,624 rn50_graphsurgeon.py:566 INFO] Fusing SmallTileGEMM_TRT_res4b_branch2c_conv_residual_relu with smallk...
[2022-11-14 21:05:08,624 rn50_graphsurgeon.py:566 INFO] Fusing SmallTileGEMM_TRT_res4c_branch2c_conv_residual_relu with smallk...
[2022-11-14 21:05:08,625 rn50_graphsurgeon.py:566 INFO] Fusing SmallTileGEMM_TRT_res4d_branch2c_conv_residual_relu with smallk...
[2022-11-14 21:05:08,625 rn50_graphsurgeon.py:566 INFO] Fusing SmallTileGEMM_TRT_res4e_branch2c_conv_residual_relu with smallk...
[2022-11-14 21:05:08,625 rn50_graphsurgeon.py:566 INFO] Fusing SmallTileGEMM_TRT_res4f_branch2c_conv_residual_relu with smallk...
[2022-11-14 21:05:08,625 rn50_graphsurgeon.py:566 INFO] Fusing SmallTileGEMM_TRT_res5a_branch2c_conv_residual_relu with smallk...
[2022-11-14 21:05:08,625 rn50_graphsurgeon.py:566 INFO] Fusing SmallTileGEMM_TRT_res5b_branch2c_conv_residual_relu with smallk...
[2022-11-14 21:05:08,625 rn50_graphsurgeon.py:566 INFO] Fusing SmallTileGEMM_TRT_res5c_branch2c_conv_residual_relu with smallk...
[2022-11-14 21:05:08,627 rn50_graphsurgeon.py:573 INFO] Plugin SmallTileGEMM_TRT fused successful for res3/4/5 branch2c
[11/14/2022-21:05:08] [TRT] [I] No importer registered for op: RnRes2FullFusion_TRT. Attempting to import as plugin.
[11/14/2022-21:05:08] [TRT] [I] Searching for plugin: RnRes2FullFusion_TRT, plugin_version: 1, plugin_namespace:
[11/14/2022-21:05:08] [TRT] [I] Successfully created plugin: RnRes2FullFusion_TRT
[11/14/2022-21:05:08] [TRT] [I] No importer registered for op: SmallTileGEMM_TRT. Attempting to import as plugin.
[11/14/2022-21:05:08] [TRT] [I] Searching for plugin: SmallTileGEMM_TRT, plugin_version: 1, plugin_namespace:
[11/14/2022-21:05:08] [TRT] [W] builtin_op_importers.cpp:5123: Attribute epilogueScaleBias not found in plugin node! Ensure that the plugin creator has a default value defined or the engine may fail to build.
[11/14/2022-21:05:08] [TRT] [W] builtin_op_importers.cpp:5123: Attribute epilogueScaleBiasRelu not found in plugin node! Ensure that the plugin creator has a default value defined or the engine may fail to build.
[11/14/2022-21:05:08] [TRT] [W] builtin_op_importers.cpp:5123: Attribute epilogueScaleBiasGelu not found in plugin node! Ensure that the plugin creator has a default value defined or the engine may fail to build.
[11/14/2022-21:05:08] [TRT] [W] builtin_op_importers.cpp:5123: Attribute epilogueScaleBiasBeta not found in plugin node! Ensure that the plugin creator has a default value defined or the engine may fail to build.
[11/14/2022-21:05:08] [TRT] [I] Successfully created plugin: SmallTileGEMM_TRT
[… the previous seven lines (No importer registered / Searching for plugin / four epilogueScaleBias* attribute warnings / Successfully created plugin) repeat identically for each of the remaining 12 SmallTileGEMM_TRT nodes, res3b through res5c …]
[2022-11-14 21:05:08,750 ResNet50.py:94 INFO] Unmarking output: topk_layer_output_value
[11/14/2022-21:05:08] [TRT] [W] DynamicRange(min: -128, max: 127). Dynamic range should be symmetric for better accuracy.
[2022-11-14 21:05:08,751 builder.py:177 INFO] Building ./build/engines/Orin/resnet50/Offline/resnet50-Offline-gpu-b256-int8.lwis_k_99_MaxP.plan
[11/14/2022-21:05:08] [TRT] [W] DLA requests all profiles have same min, max, and opt value. All dla layers are falling back to GPU
[11/14/2022-21:05:08] [TRT] [I] Reading Calibration Cache for calibrator: EntropyCalibration2
[11/14/2022-21:05:08] [TRT] [I] Generated calibration scales using calibration cache. Make sure that calibration cache has latest scales.
[11/14/2022-21:05:08] [TRT] [I] To regenerate calibration cache, please delete the existing one. TensorRT will generate a new calibration cache.
[11/14/2022-21:05:08] [TRT] [I] ---------- Layers Running on DLA ----------
[11/14/2022-21:05:08] [TRT] [I] ---------- Layers Running on GPU ----------
[11/14/2022-21:05:08] [TRT] [I] [GpuLayer] CONV_ACT_POOL: conv1 + scale_conv1 + conv1_relu + pool1
[11/14/2022-21:05:08] [TRT] [I] [GpuLayer] PLUGIN_V2: RES2_FULL_FUSION
[11/14/2022-21:05:08] [TRT] [I] [GpuLayer] CONVOLUTION: res3a_branch2a + res3a_branch2a_relu
[11/14/2022-21:05:08] [TRT] [I] [GpuLayer] CONVOLUTION: res3a_branch1
[11/14/2022-21:05:08] [TRT] [I] [GpuLayer] CONVOLUTION: res3a_branch2b + res3a_branch2b_relu
[11/14/2022-21:05:08] [TRT] [I] [GpuLayer] PLUGIN_V2: SmallTileGEMM_TRT_res3a_branch2c_conv_residual_relu
[11/14/2022-21:05:08] [TRT] [I] [GpuLayer] CONVOLUTION: res3b_branch2a + res3b_branch2a_relu
[11/14/2022-21:05:08] [TRT] [I] [GpuLayer] CONVOLUTION: res3b_branch2b + res3b_branch2b_relu
[11/14/2022-21:05:08] [TRT] [I] [GpuLayer] PLUGIN_V2: SmallTileGEMM_TRT_res3b_branch2c_conv_residual_relu
[11/14/2022-21:05:08] [TRT] [I] [GpuLayer] CONVOLUTION: res3c_branch2a + res3c_branch2a_relu
[11/14/2022-21:05:08] [TRT] [I] [GpuLayer] CONVOLUTION: res3c_branch2b + res3c_branch2b_relu
[11/14/2022-21:05:08] [TRT] [I] [GpuLayer] PLUGIN_V2: SmallTileGEMM_TRT_res3c_branch2c_conv_residual_relu
[11/14/2022-21:05:08] [TRT] [I] [GpuLayer] CONVOLUTION: res3d_branch2a + res3d_branch2a_relu
[11/14/2022-21:05:08] [TRT] [I] [GpuLayer] CONVOLUTION: res3d_branch2b + res3d_branch2b_relu
[11/14/2022-21:05:08] [TRT] [I] [GpuLayer] PLUGIN_V2: SmallTileGEMM_TRT_res3d_branch2c_conv_residual_relu
[11/14/2022-21:05:08] [TRT] [I] [GpuLayer] CONVOLUTION: res4a_branch2a + res4a_branch2a_relu
[11/14/2022-21:05:08] [TRT] [I] [GpuLayer] CONVOLUTION: res4a_branch1
[11/14/2022-21:05:08] [TRT] [I] [GpuLayer] CONVOLUTION: res4a_branch2b + res4a_branch2b_relu
[11/14/2022-21:05:08] [TRT] [I] [GpuLayer] PLUGIN_V2: SmallTileGEMM_TRT_res4a_branch2c_conv_residual_relu
[11/14/2022-21:05:08] [TRT] [I] [GpuLayer] CONVOLUTION: res4b_branch2a + res4b_branch2a_relu
[11/14/2022-21:05:08] [TRT] [I] [GpuLayer] CONVOLUTION: res4b_branch2b + res4b_branch2b_relu
[11/14/2022-21:05:08] [TRT] [I] [GpuLayer] PLUGIN_V2: SmallTileGEMM_TRT_res4b_branch2c_conv_residual_relu
[11/14/2022-21:05:08] [TRT] [I] [GpuLayer] CONVOLUTION: res4c_branch2a + res4c_branch2a_relu
[11/14/2022-21:05:08] [TRT] [I] [GpuLayer] CONVOLUTION: res4c_branch2b + res4c_branch2b_relu
[11/14/2022-21:05:08] [TRT] [I] [GpuLayer] PLUGIN_V2: SmallTileGEMM_TRT_res4c_branch2c_conv_residual_relu
[11/14/2022-21:05:08] [TRT] [I] [GpuLayer] CONVOLUTION: res4d_branch2a + res4d_branch2a_relu
[11/14/2022-21:05:08] [TRT] [I] [GpuLayer] CONVOLUTION: res4d_branch2b + res4d_branch2b_relu
[11/14/2022-21:05:08] [TRT] [I] [GpuLayer] PLUGIN_V2: SmallTileGEMM_TRT_res4d_branch2c_conv_residual_relu
[11/14/2022-21:05:08] [TRT] [I] [GpuLayer] CONVOLUTION: res4e_branch2a + res4e_branch2a_relu
[11/14/2022-21:05:08] [TRT] [I] [GpuLayer] CONVOLUTION: res4e_branch2b + res4e_branch2b_relu
[11/14/2022-21:05:08] [TRT] [I] [GpuLayer] PLUGIN_V2: SmallTileGEMM_TRT_res4e_branch2c_conv_residual_relu
[11/14/2022-21:05:08] [TRT] [I] [GpuLayer] CONVOLUTION: res4f_branch2a + res4f_branch2a_relu
[11/14/2022-21:05:08] [TRT] [I] [GpuLayer] CONVOLUTION: res4f_branch2b + res4f_branch2b_relu
[11/14/2022-21:05:08] [TRT] [I] [GpuLayer] PLUGIN_V2: SmallTileGEMM_TRT_res4f_branch2c_conv_residual_relu
[11/14/2022-21:05:08] [TRT] [I] [GpuLayer] CONVOLUTION: res5a_branch1
[11/14/2022-21:05:08] [TRT] [I] [GpuLayer] CONVOLUTION: res5a_branch2a + res5a_branch2a_relu
[11/14/2022-21:05:08] [TRT] [I] [GpuLayer] CONVOLUTION: res5a_branch2b + res5a_branch2b_relu
[11/14/2022-21:05:08] [TRT] [I] [GpuLayer] PLUGIN_V2: SmallTileGEMM_TRT_res5a_branch2c_conv_residual_relu
[11/14/2022-21:05:08] [TRT] [I] [GpuLayer] CONVOLUTION: res5b_branch2a + res5b_branch2a_relu
[11/14/2022-21:05:08] [TRT] [I] [GpuLayer] CONVOLUTION: res5b_branch2b + res5b_branch2b_relu
[11/14/2022-21:05:08] [TRT] [I] [GpuLayer] PLUGIN_V2: SmallTileGEMM_TRT_res5b_branch2c_conv_residual_relu
[11/14/2022-21:05:08] [TRT] [I] [GpuLayer] CONVOLUTION: res5c_branch2a + res5c_branch2a_relu
[11/14/2022-21:05:08] [TRT] [I] [GpuLayer] CONVOLUTION: res5c_branch2b + res5c_branch2b_relu
[11/14/2022-21:05:08] [TRT] [I] [GpuLayer] PLUGIN_V2: SmallTileGEMM_TRT_res5c_branch2c_conv_residual_relu
[11/14/2022-21:05:08] [TRT] [I] [GpuLayer] POOLING: squeeze_replaced
[11/14/2022-21:05:08] [TRT] [I] [GpuLayer] CONVOLUTION: fc_replaced
[11/14/2022-21:05:08] [TRT] [I] [GpuLayer] TOPK: topk_layer
[11/14/2022-21:05:08] [TRT] [I] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +0, now: CPU 1857, GPU 12188 (MiB)
[11/14/2022-21:05:08] [TRT] [I] [MemUsageChange] Init cuDNN: CPU +0, GPU +0, now: CPU 1857, GPU 12188 (MiB)
[11/14/2022-21:05:08] [TRT] [I] Local timing cache in use. Profiling results in this builder pass will not be stored.
[11/14/2022-21:05:52] [TRT] [I] Some tactics do not have sufficient workspace memory to run. Increasing workspace size will enable more tactics, please check verbose output for requested sizes.
[11/14/2022-21:07:48] [TRT] [I] Detected 1 inputs and 1 output network tensors.
[11/14/2022-21:07:49] [TRT] [I] Total Host Persistent Memory: 81568
[11/14/2022-21:07:49] [TRT] [I] Total Device Persistent Memory: 22528
[11/14/2022-21:07:49] [TRT] [I] Total Scratch Memory: 811008
[11/14/2022-21:07:49] [TRT] [I] [MemUsageStats] Peak memory usage of TRT CPU/GPU memory allocators: CPU 26 MiB, GPU 1964 MiB
[11/14/2022-21:07:49] [TRT] [I] [BlockAssignment] Algorithm ShiftNTopDown took 1.4794ms to assign 4 blocks to 51 nodes requiring 417464320 bytes.
[11/14/2022-21:07:49] [TRT] [I] Total Activation Memory: 417464320
[11/14/2022-21:07:51] [TRT] [I] Detected 1 inputs and 1 output network tensors.
[11/14/2022-21:07:51] [TRT] [I] Total Host Persistent Memory: 81568
[11/14/2022-21:07:51] [TRT] [I] Total Device Persistent Memory: 22528
[11/14/2022-21:07:51] [TRT] [I] Total Scratch Memory: 811008
[11/14/2022-21:07:51] [TRT] [I] [MemUsageStats] Peak memory usage of TRT CPU/GPU memory allocators: CPU 44 MiB, GPU 1964 MiB
[11/14/2022-21:07:51] [TRT] [I] [BlockAssignment] Algorithm ShiftNTopDown took 1.497ms to assign 4 blocks to 51 nodes requiring 417464320 bytes.
[11/14/2022-21:07:51] [TRT] [I] Total Activation Memory: 417464320
[11/14/2022-21:07:51] [TRT] [I] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +0, now: CPU 2162, GPU 14688 (MiB)
[11/14/2022-21:07:51] [TRT] [I] [MemUsageChange] Init cuDNN: CPU +1, GPU +0, now: CPU 2163, GPU 14688 (MiB)
[11/14/2022-21:07:51] [TRT] [I] [MemUsageChange] TensorRT-managed allocation in building engine: CPU +39, GPU +32, now: CPU 39, GPU 32 (MiB)
[11/14/2022-21:07:51] [TRT] [I] The profiling verbosity was set to ProfilingVerbosity::kLAYER_NAMES_ONLY when the engine was built, so only the layer names will be returned. Rebuild the engine with ProfilingVerbosity::kDETAILED to get more verbose layer information.
[2022-11-14 21:07:51,473 builder.py:210 INFO] ========= TensorRT Engine Layer Information =========
[2022-11-14 21:07:51,473 builder.py:211 INFO] Layers:
Reformatting CopyNode for Input Tensor 0 to conv1 + scale_conv1 + conv1_relu + pool1
conv1 + scale_conv1 + conv1_relu + pool1
RES2_FULL_FUSION
res3a_branch2a + res3a_branch2a_relu
res3a_branch1
res3a_branch2b + res3a_branch2b_relu
SmallTileGEMM_TRT_res3a_branch2c_conv_residual_relu
res3b_branch2a + res3b_branch2a_relu
res3b_branch2b + res3b_branch2b_relu
SmallTileGEMM_TRT_res3b_branch2c_conv_residual_relu
res3c_branch2a + res3c_branch2a_relu
res3c_branch2b + res3c_branch2b_relu
SmallTileGEMM_TRT_res3c_branch2c_conv_residual_relu
res3d_branch2a + res3d_branch2a_relu
res3d_branch2b + res3d_branch2b_relu
SmallTileGEMM_TRT_res3d_branch2c_conv_residual_relu
res4a_branch2a + res4a_branch2a_relu
res4a_branch1
res4a_branch2b + res4a_branch2b_relu
SmallTileGEMM_TRT_res4a_branch2c_conv_residual_relu
res4b_branch2a + res4b_branch2a_relu
res4b_branch2b + res4b_branch2b_relu
SmallTileGEMM_TRT_res4b_branch2c_conv_residual_relu
res4c_branch2a + res4c_branch2a_relu
res4c_branch2b + res4c_branch2b_relu
SmallTileGEMM_TRT_res4c_branch2c_conv_residual_relu
res4d_branch2a + res4d_branch2a_relu
res4d_branch2b + res4d_branch2b_relu
SmallTileGEMM_TRT_res4d_branch2c_conv_residual_relu
res4e_branch2a + res4e_branch2a_relu
res4e_branch2b + res4e_branch2b_relu
SmallTileGEMM_TRT_res4e_branch2c_conv_residual_relu
res4f_branch2a + res4f_branch2a_relu
res4f_branch2b + res4f_branch2b_relu
SmallTileGEMM_TRT_res4f_branch2c_conv_residual_relu
res5a_branch1
res5a_branch2a + res5a_branch2a_relu
res5a_branch2b + res5a_branch2b_relu
SmallTileGEMM_TRT_res5a_branch2c_conv_residual_relu
res5b_branch2a + res5b_branch2a_relu
res5b_branch2b + res5b_branch2b_relu
SmallTileGEMM_TRT_res5b_branch2c_conv_residual_relu
res5c_branch2a + res5c_branch2a_relu
res5c_branch2b + res5c_branch2b_relu
SmallTileGEMM_TRT_res5c_branch2c_conv_residual_relu
squeeze_replaced
fc_replaced
Reformatting CopyNode for Input Tensor 0 to topk_layer
topk_layer
Reformatting CopyNode for Input Tensor 0 to conv1 + scale_conv1 + conv1_relu + pool1 [profile 1]
conv1 + scale_conv1 + conv1_relu + pool1 [profile 1]
RES2_FULL_FUSION [profile 1]
res3a_branch2a + res3a_branch2a_relu [profile 1]
res3a_branch1 [profile 1]
res3a_branch2b + res3a_branch2b_relu [profile 1]
SmallTileGEMM_TRT_res3a_branch2c_conv_residual_relu [profile 1]
res3b_branch2a + res3b_branch2a_relu [profile 1]
res3b_branch2b + res3b_branch2b_relu [profile 1]
SmallTileGEMM_TRT_res3b_branch2c_conv_residual_relu [profile 1]
res3c_branch2a + res3c_branch2a_relu [profile 1]
res3c_branch2b + res3c_branch2b_relu [profile 1]
SmallTileGEMM_TRT_res3c_branch2c_conv_residual_relu [profile 1]
res3d_branch2a + res3d_branch2a_relu [profile 1]
res3d_branch2b + res3d_branch2b_relu [profile 1]
SmallTileGEMM_TRT_res3d_branch2c_conv_residual_relu [profile 1]
res4a_branch2a + res4a_branch2a_relu [profile 1]
res4a_branch1 [profile 1]
res4a_branch2b + res4a_branch2b_relu [profile 1]
SmallTileGEMM_TRT_res4a_branch2c_conv_residual_relu [profile 1]
res4b_branch2a + res4b_branch2a_relu [profile 1]
res4b_branch2b + res4b_branch2b_relu [profile 1]
SmallTileGEMM_TRT_res4b_branch2c_conv_residual_relu [profile 1]
res4c_branch2a + res4c_branch2a_relu [profile 1]
res4c_branch2b + res4c_branch2b_relu [profile 1]
SmallTileGEMM_TRT_res4c_branch2c_conv_residual_relu [profile 1]
res4d_branch2a + res4d_branch2a_relu [profile 1]
res4d_branch2b + res4d_branch2b_relu [profile 1]
SmallTileGEMM_TRT_res4d_branch2c_conv_residual_relu [profile 1]
res4e_branch2a + res4e_branch2a_relu [profile 1]
res4e_branch2b + res4e_branch2b_relu [profile 1]
SmallTileGEMM_TRT_res4e_branch2c_conv_residual_relu [profile 1]
res4f_branch2a + res4f_branch2a_relu [profile 1]
res4f_branch2b + res4f_branch2b_relu [profile 1]
SmallTileGEMM_TRT_res4f_branch2c_conv_residual_relu [profile 1]
res5a_branch1 [profile 1]
res5a_branch2a + res5a_branch2a_relu [profile 1]
res5a_branch2b + res5a_branch2b_relu [profile 1]
SmallTileGEMM_TRT_res5a_branch2c_conv_residual_relu [profile 1]
res5b_branch2a + res5b_branch2a_relu [profile 1]
res5b_branch2b + res5b_branch2b_relu [profile 1]
SmallTileGEMM_TRT_res5b_branch2c_conv_residual_relu [profile 1]
res5c_branch2a + res5c_branch2a_relu [profile 1]
res5c_branch2b + res5c_branch2b_relu [profile 1]
SmallTileGEMM_TRT_res5c_branch2c_conv_residual_relu [profile 1]
squeeze_replaced [profile 1]
fc_replaced [profile 1]
Reformatting CopyNode for Input Tensor 0 to topk_layer [profile 1]
topk_layer [profile 1]
Bindings:
input_tensor_0
topk_layer_output_index
input_tensor_0 [profile 1]
topk_layer_output_index [profile 1]
[11/14/2022-21:07:51] [TRT] [W] The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
[11/14/2022-21:07:51] [TRT] [W] The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
[2022-11-14 21:07:51,508 generate_engines.py:176 INFO] Finished building engines for resnet50 benchmark in Offline scenario.
Time taken to generate engines: 179.76668906211853 seconds
zack@zack-desktop:~/inference_results_v2.1/closed/NVIDIA$ make run_harness RUN_ARGS="--benchmarks=resnet50 --scenarios=Offline --test_mode=PerformanceOnly --fast"
[2022-11-14 21:09:49,881 main_v2.py:221 INFO] Detected system ID: KnownSystem.Orin
[2022-11-14 21:09:50,413 generate_conf_files.py:103 INFO] Generated measurements/ entries for Orin_TRT/resnet50/Offline
[2022-11-14 21:09:50,414 __init__.py:44 INFO] Running command: ./build/bin/harness_default --logfile_outdir="/home/zack/inference_results_v2.1/closed/NVIDIA/build/logs/2022.11.14-21.09.48/Orin_TRT/resnet50/Offline" --logfile_prefix="mlperf_log_" --performance_sample_count=2048 --test_mode="PerformanceOnly" --dla_batch_size=16 --dla_copy_streams=2 --dla_inference_streams=1 --gpu_copy_streams=2 --gpu_inference_streams=1 --use_direct_host_access=true --gpu_batch_size=256 --map_path="data_maps/imagenet/val_map.txt" --tensor_path="build/preprocessed_data/imagenet/ResNet50/int8_chw4/" --use_graphs=false --gpu_engines="./build/engines/Orin/resnet50/Offline/resnet50-Offline-gpu-b256-int8.lwis_k_99_MaxP.plan" --mlperf_conf_path="measurements/Orin_TRT/resnet50/Offline/mlperf.conf" --user_conf_path="measurements/Orin_TRT/resnet50/Offline/user.conf" --dla_engines="./build/engines/Orin/resnet50/Offline/resnet50-Offline-dla-b16-int8.lwis_k_99_MaxP.plan" --scenario Offline --model resnet50
[2022-11-14 21:09:50,414 __init__.py:51 INFO] Overriding Environment
benchmark : Benchmark.ResNet50
buffer_manager_thread_count : 0
data_dir : /home/zack/orin/scratch/data
dla_batch_size : 16
dla_copy_streams : 2
dla_inference_streams : 1
fast : True
gpu_batch_size : 256
gpu_copy_streams : 2
gpu_inference_streams : 1
input_dtype : int8
input_format : chw4
log_dir : /home/zack/inference_results_v2.1/closed/NVIDIA/build/logs/2022.11.14-21.09.48
map_path : data_maps/imagenet/val_map.txt
offline_expected_qps : 5700
precision : int8
preprocessed_data_dir : /home/zack/orin/scratch/preprocessed_data
scenario : Scenario.Offline
system : SystemConfiguration(host_cpu_conf=CPUConfiguration(layout={CPU(name='ARMv8 Processor rev 1 (v8l)', architecture=<CPUArchitecture.aarch64: AliasedName(name='aarch64', aliases=(), patterns=())>, core_count=4, threads_per_core=1): 3}), host_mem_conf=MemoryConfiguration(host_memory_capacity=Memory(quantity=31.940991999999998, byte_suffix=<ByteSuffix.GB: (1000, 3)>, _num_bytes=31940992000), comparison_tolerance=0.05), accelerator_conf=AcceleratorConfiguration(layout=defaultdict(<class 'int'>, {GPU(name='Jetson AGX Orin', accelerator_type=<AcceleratorType.Integrated: AliasedName(name='Integrated', aliases=(), patterns=())>, vram=None, max_power_limit=None, pci_id=None, compute_sm=87): 1})), numa_conf=None, system_id='Orin')
tensor_path : build/preprocessed_data/imagenet/ResNet50/int8_chw4/
test_mode : PerformanceOnly
use_direct_host_access : True
use_graphs : False
system_id : Orin
config_name : Orin_resnet50_Offline
workload_setting : WorkloadSetting(HarnessType.LWIS, AccuracyTarget.k_99, PowerSetting.MaxP)
optimization_level : plugin-enabled
use_cpu : False
use_inferentia : False
config_ver : lwis_k_99_MaxP
accuracy_level : 99%
inference_server : lwis
soc_gpu_freq : None
soc_dla_freq : None
soc_cpu_freq : None
soc_emc_freq : None
orin_num_cores : None
&&&& RUNNING Default_Harness # ./build/bin/harness_default
[I] mlperf.conf path: measurements/Orin_TRT/resnet50/Offline/mlperf.conf
[I] user.conf path: measurements/Orin_TRT/resnet50/Offline/user.conf
Creating QSL.
F1114 21:09:50.519011 5131 numpy.hpp:52] Check failed: m_FStream Unable to parse: build/preprocessed_data/imagenet/ResNet50/int8_chw4//ILSVRC2012_val_00000001.JPEG.npy
*** Check failure stack trace: ***
@ 0xffffb0a583c4 google::LogMessage::Fail()
@ 0xffffb0a582c8 google::LogMessage::SendToLog()
@ 0xffffb0a57be4 google::LogMessage::Flush()
@ 0xffffb0a5ae0c google::LogMessageFatal::~LogMessageFatal()
@ 0xaaaae9df5008 (unknown)
@ 0xaaaae9df7514 (unknown)
@ 0xaaaae9dd8280 (unknown)
@ 0xaaaae9dd57c0 (unknown)
@ 0xffffa1b10e10 __libc_start_main
@ 0xaaaae9dd5de4 (unknown)
Aborted (core dumped)
Traceback (most recent call last):
File "/usr/lib/python3.8/runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/usr/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/home/zack/inference_results_v2.1/closed/NVIDIA/code/main_v2.py", line 223, in <module>
main(main_args, DETECTED_SYSTEM)
File "/home/zack/inference_results_v2.1/closed/NVIDIA/code/main_v2.py", line 147, in main
dispatch_action(main_args, config_dict, workload_setting)
File "/home/zack/inference_results_v2.1/closed/NVIDIA/code/main_v2.py", line 194, in dispatch_action
handler.run()
File "/home/zack/inference_results_v2.1/closed/NVIDIA/code/actionhandler/base.py", line 79, in run
self.handle_failure()
File "/home/zack/inference_results_v2.1/closed/NVIDIA/code/actionhandler/run_harness.py", line 221, in handle_failure
raise RuntimeError("Run harness failed!")
RuntimeError: Run harness failed!
Traceback (most recent call last):
File "/home/zack/inference_results_v2.1/closed/NVIDIA/code/actionhandler/run_harness.py", line 204, in handle
result = self.harness.run_harness(flag_dict=self.harness_flag_dict, skip_generate_measurements=True)
File "/home/zack/inference_results_v2.1/closed/NVIDIA/code/common/harness.py", line 278, in run_harness
output = run_command(cmd, get_output=True, custom_env=self.env_vars)
File "/home/zack/inference_results_v2.1/closed/NVIDIA/code/common/__init__.py", line 65, in run_command
raise subprocess.CalledProcessError(ret, cmd)
subprocess.CalledProcessError: Command './build/bin/harness_default --logfile_outdir="/home/zack/inference_results_v2.1/closed/NVIDIA/build/logs/2022.11.14-21.09.48/Orin_TRT/resnet50/Offline" --logfile_prefix="mlperf_log_" --performance_sample_count=2048 --test_mode="PerformanceOnly" --dla_batch_size=16 --dla_copy_streams=2 --dla_inference_streams=1 --gpu_copy_streams=2 --gpu_inference_streams=1 --use_direct_host_access=true --gpu_batch_size=256 --map_path="data_maps/imagenet/val_map.txt" --tensor_path="build/preprocessed_data/imagenet/ResNet50/int8_chw4/" --use_graphs=false --gpu_engines="./build/engines/Orin/resnet50/Offline/resnet50-Offline-gpu-b256-int8.lwis_k_99_MaxP.plan" --mlperf_conf_path="measurements/Orin_TRT/resnet50/Offline/mlperf.conf" --user_conf_path="measurements/Orin_TRT/resnet50/Offline/user.conf" --dla_engines="./build/engines/Orin/resnet50/Offline/resnet50-Offline-dla-b16-int8.lwis_k_99_MaxP.plan" --scenario Offline --model resnet50' returned non-zero exit status 134.
make: *** [Makefile:702: run_harness] Error 1
Hello @113736752 ,
Try the command below instead, where I have replaced run_harness with run:
make run RUN_ARGS="--benchmarks=resnet50 --scenarios=Offline --config_ver=triton --test_mode=PerformanceOnly"
I am not sure it will help, but give it a try :) Let me know about the results.
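Also, the numpy.hpp check failure in your log means the harness could not open build/preprocessed_data/imagenet/ResNet50/int8_chw4/ILSVRC2012_val_00000001.JPEG.npy, which usually means preprocessing did not finish. As a quick sanity check (a small sketch I wrote, not part of the repo; the path is taken from your log), you could verify the preprocessed files exist before launching the harness:

```python
import os
import sys

def check_preprocessed(tensor_dir, sample="ILSVRC2012_val_00000001.JPEG.npy"):
    """Return True if tensor_dir exists and contains the first validation sample."""
    if not os.path.isdir(tensor_dir):
        print(f"Missing directory: {tensor_dir}")
        return False
    path = os.path.join(tensor_dir, sample)
    if not os.path.isfile(path):
        print(f"Missing file: {path}")
        return False
    print(f"OK: found {path}")
    return True

if __name__ == "__main__":
    # Default path matches the --tensor_path from the harness log
    tensor_dir = sys.argv[1] if len(sys.argv) > 1 else \
        "build/preprocessed_data/imagenet/ResNet50/int8_chw4"
    sys.exit(0 if check_preprocessed(tensor_dir) else 1)
```

If the file is missing, re-run make preprocess_data and check that MLPERF_SCRATCH_PATH points at the right scratch space.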
Harry
Hi @Harry-S Failed with the following error:
zack@zack-desktop:~/inference_results_v2.1/closed/NVIDIA$ make run RUN_ARGS="--benchmarks=resnet50 --scenarios=Offline --config_ver=triton --test_mode=PerformanceOnly"
make[1]: Entering directory '/home/zack/inference_results_v2.1/closed/NVIDIA'
[2022-11-14 22:07:55,169 main_v2.py:221 INFO] Detected system ID: KnownSystem.Orin
[2022-11-14 22:07:56,076 generate_engines.py:172 INFO] Building engines for resnet50 benchmark in Offline scenario…
[2022-11-14 22:07:56,105 ResNet50.py:36 INFO] Using workspace size: 1073741824
[11/14/2022-22:07:56] [TRT] [I] [MemUsageChange] Init CUDA: CPU +215, GPU +0, now: CPU 268, GPU 12992 (MiB)
[11/14/2022-22:07:57] [TRT] [I] [MemUsageChange] Init builder kernel library: CPU +346, GPU +343, now: CPU 636, GPU 13358 (MiB)
[2022-11-14 22:07:57,981 builder.py:107 INFO] Using DLA: Core 0
[2022-11-14 22:07:58,247 rn50_graphsurgeon.py:474 INFO] Renaming layers
[2022-11-14 22:07:58,247 rn50_graphsurgeon.py:485 INFO] Renaming tensors
[2022-11-14 22:07:58,247 rn50_graphsurgeon.py:834 INFO] Adding Squeeze
[2022-11-14 22:07:58,247 rn50_graphsurgeon.py:869 INFO] Adding Conv layer, instead of FC
[2022-11-14 22:07:58,249 rn50_graphsurgeon.py:890 INFO] Adding TopK layer
[2022-11-14 22:07:58,249 rn50_graphsurgeon.py:907 INFO] Removing obsolete layers
[2022-11-14 22:07:58,321 ResNet50.py:94 INFO] Unmarking output: topk_layer_output_value
[11/14/2022-22:07:58] [TRT] [W] DynamicRange(min: -128, max: 127). Dynamic range should be symmetric for better accuracy.
[2022-11-14 22:07:58,322 builder.py:177 INFO] Building ./build/engines/Orin/resnet50/Offline/resnet50-Offline-dla-b16-int8.triton_k_99_MaxP.plan
[11/14/2022-22:07:58] [TRT] [W] Layer 'topk_layer': Unsupported on DLA. Switching this layer's device type to GPU.
[11/14/2022-22:07:58] [TRT] [I] Reading Calibration Cache for calibrator: EntropyCalibration2
[11/14/2022-22:07:58] [TRT] [I] Generated calibration scales using calibration cache. Make sure that calibration cache has latest scales.
[11/14/2022-22:07:58] [TRT] [I] To regenerate calibration cache, please delete the existing one. TensorRT will generate a new calibration cache.
[11/14/2022-22:08:02] [TRT] [I] ---------- Layers Running on DLA ----------
[11/14/2022-22:08:02] [TRT] [I] [DlaLayer] {ForeignNode[conv1 + scale_conv1…fc_replaced]}
[11/14/2022-22:08:02] [TRT] [I] ---------- Layers Running on GPU ----------
[11/14/2022-22:08:02] [TRT] [I] [GpuLayer] TOPK: topk_layer
[11/14/2022-22:08:02] [TRT] [I] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +534, GPU +369, now: CPU 1662, GPU 14894 (MiB)
[11/14/2022-22:08:03] [TRT] [I] [MemUsageChange] Init cuDNN: CPU +86, GPU +81, now: CPU 1748, GPU 14975 (MiB)
[11/14/2022-22:08:03] [TRT] [I] Local timing cache in use. Profiling results in this builder pass will not be stored.
[11/14/2022-22:08:07] [TRT] [I] Detected 1 inputs and 1 output network tensors.
[11/14/2022-22:08:08] [TRT] [I] Total Host Persistent Memory: 48
[11/14/2022-22:08:08] [TRT] [I] Total Device Persistent Memory: 0
[11/14/2022-22:08:08] [TRT] [I] Total Scratch Memory: 1024
[11/14/2022-22:08:08] [TRT] [I] [MemUsageStats] Peak memory usage of TRT CPU/GPU memory allocators: CPU 26 MiB, GPU 31 MiB
[11/14/2022-22:08:08] [TRT] [I] [BlockAssignment] Algorithm ShiftNTopDown took 0.01376ms to assign 3 blocks to 4 nodes requiring 80896 bytes.
[11/14/2022-22:08:08] [TRT] [I] Total Activation Memory: 80896
[11/14/2022-22:08:08] [TRT] [I] [MemUsageChange] TensorRT-managed allocation in building engine: CPU +25, GPU +4, now: CPU 25, GPU 4 (MiB)
[11/14/2022-22:08:08] [TRT] [I] The profiling verbosity was set to ProfilingVerbosity::kLAYER_NAMES_ONLY when the engine was built, so only the layer names will be returned. Rebuild the engine with ProfilingVerbosity::kDETAILED to get more verbose layer information.
[2022-11-14 22:08:08,393 builder.py:210 INFO] ========= TensorRT Engine Layer Information =========
[2022-11-14 22:08:08,393 builder.py:211 INFO] Layers:
{ForeignNode[conv1 + scale_conv1…fc_replaced]}
Reformatting CopyNode for Input Tensor 0 to topk_layer
topk_layer
Bindings:
input_tensor_0
topk_layer_output_index
[11/14/2022-22:08:08] [TRT] [W] The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
[11/14/2022-22:08:08] [TRT] [W] The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
[2022-11-14 22:08:08,406 ResNet50.py:36 INFO] Using workspace size: 1073741824
[11/14/2022-22:08:08] [TRT] [I] [MemUsageChange] Init CUDA: CPU +0, GPU +0, now: CPU 1461, GPU 14416 (MiB)
[2022-11-14 22:08:08,790 rn50_graphsurgeon.py:474 INFO] Renaming layers
[2022-11-14 22:08:08,791 rn50_graphsurgeon.py:485 INFO] Renaming tensors
[2022-11-14 22:08:08,791 rn50_graphsurgeon.py:834 INFO] Adding Squeeze
[2022-11-14 22:08:08,791 rn50_graphsurgeon.py:869 INFO] Adding Conv layer, instead of FC
[2022-11-14 22:08:08,793 rn50_graphsurgeon.py:890 INFO] Adding TopK layer
[2022-11-14 22:08:08,793 rn50_graphsurgeon.py:907 INFO] Removing obsolete layers
[2022-11-14 22:08:08,795 rn50_graphsurgeon.py:580 INFO] Fusing ops in res2_mega
[2022-11-14 22:08:08,797 rn50_graphsurgeon.py:693 INFO] Plugin RES2_FULL_FUSION successful
[2022-11-14 22:08:08,798 rn50_graphsurgeon.py:499 INFO] Replacing all branch2c beta=1 conv with smallk kernel.
[2022-11-14 22:08:08,798 rn50_graphsurgeon.py:566 INFO] Fusing SmallTileGEMM_TRT_res3a_branch2c_conv_residual_relu with smallk…
[2022-11-14 22:08:08,798 rn50_graphsurgeon.py:566 INFO] Fusing SmallTileGEMM_TRT_res3b_branch2c_conv_residual_relu with smallk…
[2022-11-14 22:08:08,798 rn50_graphsurgeon.py:566 INFO] Fusing SmallTileGEMM_TRT_res3c_branch2c_conv_residual_relu with smallk…
[2022-11-14 22:08:08,798 rn50_graphsurgeon.py:566 INFO] Fusing SmallTileGEMM_TRT_res3d_branch2c_conv_residual_relu with smallk…
[2022-11-14 22:08:08,799 rn50_graphsurgeon.py:566 INFO] Fusing SmallTileGEMM_TRT_res4a_branch2c_conv_residual_relu with smallk…
[2022-11-14 22:08:08,799 rn50_graphsurgeon.py:566 INFO] Fusing SmallTileGEMM_TRT_res4b_branch2c_conv_residual_relu with smallk…
[2022-11-14 22:08:08,799 rn50_graphsurgeon.py:566 INFO] Fusing SmallTileGEMM_TRT_res4c_branch2c_conv_residual_relu with smallk…
[2022-11-14 22:08:08,799 rn50_graphsurgeon.py:566 INFO] Fusing SmallTileGEMM_TRT_res4d_branch2c_conv_residual_relu with smallk…
[2022-11-14 22:08:08,799 rn50_graphsurgeon.py:566 INFO] Fusing SmallTileGEMM_TRT_res4e_branch2c_conv_residual_relu with smallk…
[2022-11-14 22:08:08,800 rn50_graphsurgeon.py:566 INFO] Fusing SmallTileGEMM_TRT_res4f_branch2c_conv_residual_relu with smallk…
[2022-11-14 22:08:08,800 rn50_graphsurgeon.py:566 INFO] Fusing SmallTileGEMM_TRT_res5a_branch2c_conv_residual_relu with smallk…
[2022-11-14 22:08:08,800 rn50_graphsurgeon.py:566 INFO] Fusing SmallTileGEMM_TRT_res5b_branch2c_conv_residual_relu with smallk…
[2022-11-14 22:08:08,800 rn50_graphsurgeon.py:566 INFO] Fusing SmallTileGEMM_TRT_res5c_branch2c_conv_residual_relu with smallk…
[2022-11-14 22:08:08,802 rn50_graphsurgeon.py:573 INFO] Plugin SmallTileGEMM_TRT fused successful for res3/4/5 branch2c
[11/14/2022-22:08:08] [TRT] [I] No importer registered for op: RnRes2FullFusion_TRT. Attempting to import as plugin.
[11/14/2022-22:08:08] [TRT] [I] Searching for plugin: RnRes2FullFusion_TRT, plugin_version: 1, plugin_namespace:
[11/14/2022-22:08:08] [TRT] [I] Successfully created plugin: RnRes2FullFusion_TRT
[11/14/2022-22:08:08] [TRT] [I] No importer registered for op: SmallTileGEMM_TRT. Attempting to import as plugin.
[11/14/2022-22:08:08] [TRT] [I] Searching for plugin: SmallTileGEMM_TRT, plugin_version: 1, plugin_namespace:
[11/14/2022-22:08:08] [TRT] [W] builtin_op_importers.cpp:5123: Attribute epilogueScaleBias not found in plugin node! Ensure that the plugin creator has a default value defined or the engine may fail to build.
[11/14/2022-22:08:08] [TRT] [W] builtin_op_importers.cpp:5123: Attribute epilogueScaleBiasRelu not found in plugin node! Ensure that the plugin creator has a default value defined or the engine may fail to build.
[11/14/2022-22:08:08] [TRT] [W] builtin_op_importers.cpp:5123: Attribute epilogueScaleBiasGelu not found in plugin node! Ensure that the plugin creator has a default value defined or the engine may fail to build.
[11/14/2022-22:08:08] [TRT] [W] builtin_op_importers.cpp:5123: Attribute epilogueScaleBiasBeta not found in plugin node! Ensure that the plugin creator has a default value defined or the engine may fail to build.
[11/14/2022-22:08:08] [TRT] [I] Successfully created plugin: SmallTileGEMM_TRT
[11/14/2022-22:08:08] [TRT] [I] No importer registered for op: SmallTileGEMM_TRT. Attempting to import as plugin.
[11/14/2022-22:08:08] [TRT] [I] Searching for plugin: SmallTileGEMM_TRT, plugin_version: 1, plugin_namespace:
[11/14/2022-22:08:08] [TRT] [W] builtin_op_importers.cpp:5123: Attribute epilogueScaleBias not found in plugin node! Ensure that the plugin creator has a default value defined or the engine may fail to build.
[11/14/2022-22:08:08] [TRT] [W] builtin_op_importers.cpp:5123: Attribute epilogueScaleBiasRelu not found in plugin node! Ensure that the plugin creator has a default value defined or the engine may fail to build.
[11/14/2022-22:08:08] [TRT] [W] builtin_op_importers.cpp:5123: Attribute epilogueScaleBiasGelu not found in plugin node! Ensure that the plugin creator has a default value defined or the engine may fail to build.
[11/14/2022-22:08:08] [TRT] [W] builtin_op_importers.cpp:5123: Attribute epilogueScaleBiasBeta not found in plugin node! Ensure that the plugin creator has a default value defined or the engine may fail to build.
[11/14/2022-22:08:08] [TRT] [I] Successfully created plugin: SmallTileGEMM_TRT
[11/14/2022-22:08:08] [TRT] [I] No importer registered for op: SmallTileGEMM_TRT. Attempting to import as plugin.
[11/14/2022-22:08:08] [TRT] [I] Searching for plugin: SmallTileGEMM_TRT, plugin_version: 1, plugin_namespace:
[11/14/2022-22:08:08] [TRT] [W] builtin_op_importers.cpp:5123: Attribute epilogueScaleBias not found in plugin node! Ensure that the plugin creator has a default value defined or the engine may fail to build.
[11/14/2022-22:08:08] [TRT] [W] builtin_op_importers.cpp:5123: Attribute epilogueScaleBiasRelu not found in plugin node! Ensure that the plugin creator has a default value defined or the engine may fail to build.
[11/14/2022-22:08:08] [TRT] [W] builtin_op_importers.cpp:5123: Attribute epilogueScaleBiasGelu not found in plugin node! Ensure that the plugin creator has a default value defined or the engine may fail to build.
[11/14/2022-22:08:08] [TRT] [W] builtin_op_importers.cpp:5123: Attribute epilogueScaleBiasBeta not found in plugin node! Ensure that the plugin creator has a default value defined or the engine may fail to build.
[11/14/2022-22:08:08] [TRT] [I] Successfully created plugin: SmallTileGEMM_TRT
[11/14/2022-22:08:08] [TRT] [I] No importer registered for op: SmallTileGEMM_TRT. Attempting to import as plugin.
[11/14/2022-22:08:08] [TRT] [I] Searching for plugin: SmallTileGEMM_TRT, plugin_version: 1, plugin_namespace:
[11/14/2022-22:08:08] [TRT] [W] builtin_op_importers.cpp:5123: Attribute epilogueScaleBias not found in plugin node! Ensure that the plugin creator has a default value defined or the engine may fail to build.
[11/14/2022-22:08:08] [TRT] [W] builtin_op_importers.cpp:5123: Attribute epilogueScaleBiasRelu not found in plugin node! Ensure that the plugin creator has a default value defined or the engine may fail to build.
[11/14/2022-22:08:08] [TRT] [W] builtin_op_importers.cpp:5123: Attribute epilogueScaleBiasGelu not found in plugin node! Ensure that the plugin creator has a default value defined or the engine may fail to build.
[11/14/2022-22:08:08] [TRT] [W] builtin_op_importers.cpp:5123: Attribute epilogueScaleBiasBeta not found in plugin node! Ensure that the plugin creator has a default value defined or the engine may fail to build.
[11/14/2022-22:08:08] [TRT] [I] Successfully created plugin: SmallTileGEMM_TRT
[11/14/2022-22:08:08] [TRT] [I] No importer registered for op: SmallTileGEMM_TRT. Attempting to import as plugin.
[11/14/2022-22:08:08] [TRT] [I] Searching for plugin: SmallTileGEMM_TRT, plugin_version: 1, plugin_namespace:
[11/14/2022-22:08:08] [TRT] [W] builtin_op_importers.cpp:5123: Attribute epilogueScaleBias not found in plugin node! Ensure that the plugin creator has a default value defined or the engine may fail to build.
[11/14/2022-22:08:08] [TRT] [W] builtin_op_importers.cpp:5123: Attribute epilogueScaleBiasRelu not found in plugin node! Ensure that the plugin creator has a default value defined or the engine may fail to build.
[11/14/2022-22:08:08] [TRT] [W] builtin_op_importers.cpp:5123: Attribute epilogueScaleBiasGelu not found in plugin node! Ensure that the plugin creator has a default value defined or the engine may fail to build.
[11/14/2022-22:08:08] [TRT] [W] builtin_op_importers.cpp:5123: Attribute epilogueScaleBiasBeta not found in plugin node! Ensure that the plugin creator has a default value defined or the engine may fail to build.
[11/14/2022-22:08:08] [TRT] [I] Successfully created plugin: SmallTileGEMM_TRT
[11/14/2022-22:08:08] [TRT] [I] No importer registered for op: SmallTileGEMM_TRT. Attempting to import as plugin.
[11/14/2022-22:08:08] [TRT] [I] Searching for plugin: SmallTileGEMM_TRT, plugin_version: 1, plugin_namespace:
[11/14/2022-22:08:08] [TRT] [W] builtin_op_importers.cpp:5123: Attribute epilogueScaleBias not found in plugin node! Ensure that the plugin creator has a default value defined or the engine may fail to build.
[11/14/2022-22:08:08] [TRT] [W] builtin_op_importers.cpp:5123: Attribute epilogueScaleBiasRelu not found in plugin node! Ensure that the plugin creator has a default value defined or the engine may fail to build.
[11/14/2022-22:08:08] [TRT] [W] builtin_op_importers.cpp:5123: Attribute epilogueScaleBiasGelu not found in plugin node! Ensure that the plugin creator has a default value defined or the engine may fail to build.
[11/14/2022-22:08:08] [TRT] [W] builtin_op_importers.cpp:5123: Attribute epilogueScaleBiasBeta not found in plugin node! Ensure that the plugin creator has a default value defined or the engine may fail to build.
[11/14/2022-22:08:08] [TRT] [I] Successfully created plugin: SmallTileGEMM_TRT
[11/14/2022-22:08:08] [TRT] [I] No importer registered for op: SmallTileGEMM_TRT. Attempting to import as plugin.
[11/14/2022-22:08:08] [TRT] [I] Searching for plugin: SmallTileGEMM_TRT, plugin_version: 1, plugin_namespace:
[11/14/2022-22:08:08] [TRT] [W] builtin_op_importers.cpp:5123: Attribute epilogueScaleBias not found in plugin node! Ensure that the plugin creator has a default value defined or the engine may fail to build.
[11/14/2022-22:08:08] [TRT] [W] builtin_op_importers.cpp:5123: Attribute epilogueScaleBiasRelu not found in plugin node! Ensure that the plugin creator has a default value defined or the engine may fail to build.
[11/14/2022-22:08:08] [TRT] [W] builtin_op_importers.cpp:5123: Attribute epilogueScaleBiasGelu not found in plugin node! Ensure that the plugin creator has a default value defined or the engine may fail to build.
[11/14/2022-22:08:08] [TRT] [W] builtin_op_importers.cpp:5123: Attribute epilogueScaleBiasBeta not found in plugin node! Ensure that the plugin creator has a default value defined or the engine may fail to build.
[11/14/2022-22:08:08] [TRT] [I] Successfully created plugin: SmallTileGEMM_TRT
[11/14/2022-22:08:08] [TRT] [I] No importer registered for op: SmallTileGEMM_TRT. Attempting to import as plugin.
[11/14/2022-22:08:08] [TRT] [I] Searching for plugin: SmallTileGEMM_TRT, plugin_version: 1, plugin_namespace:
[11/14/2022-22:08:08] [TRT] [W] builtin_op_importers.cpp:5123: Attribute epilogueScaleBias not found in plugin node! Ensure that the plugin creator has a default value defined or the engine may fail to build.
[11/14/2022-22:08:08] [TRT] [W] builtin_op_importers.cpp:5123: Attribute epilogueScaleBiasRelu not found in plugin node! Ensure that the plugin creator has a default value defined or the engine may fail to build.
[11/14/2022-22:08:08] [TRT] [W] builtin_op_importers.cpp:5123: Attribute epilogueScaleBiasGelu not found in plugin node! Ensure that the plugin creator has a default value defined or the engine may fail to build.
[11/14/2022-22:08:08] [TRT] [W] builtin_op_importers.cpp:5123: Attribute epilogueScaleBiasBeta not found in plugin node! Ensure that the plugin creator has a default value defined or the engine may fail to build.
[11/14/2022-22:08:08] [TRT] [I] Successfully created plugin: SmallTileGEMM_TRT
[11/14/2022-22:08:08] [TRT] [I] No importer registered for op: SmallTileGEMM_TRT. Attempting to import as plugin.
[11/14/2022-22:08:08] [TRT] [I] Searching for plugin: SmallTileGEMM_TRT, plugin_version: 1, plugin_namespace:
[11/14/2022-22:08:08] [TRT] [W] builtin_op_importers.cpp:5123: Attribute epilogueScaleBias not found in plugin node! Ensure that the plugin creator has a default value defined or the engine may fail to build.
[11/14/2022-22:08:08] [TRT] [W] builtin_op_importers.cpp:5123: Attribute epilogueScaleBiasRelu not found in plugin node! Ensure that the plugin creator has a default value defined or the engine may fail to build.
[11/14/2022-22:08:08] [TRT] [W] builtin_op_importers.cpp:5123: Attribute epilogueScaleBiasGelu not found in plugin node! Ensure that the plugin creator has a default value defined or the engine may fail to build.
[11/14/2022-22:08:08] [TRT] [W] builtin_op_importers.cpp:5123: Attribute epilogueScaleBiasBeta not found in plugin node! Ensure that the plugin creator has a default value defined or the engine may fail to build.
[11/14/2022-22:08:08] [TRT] [I] Successfully created plugin: SmallTileGEMM_TRT
[11/14/2022-22:08:08] [TRT] [I] No importer registered for op: SmallTileGEMM_TRT. Attempting to import as plugin.
[11/14/2022-22:08:08] [TRT] [I] Searching for plugin: SmallTileGEMM_TRT, plugin_version: 1, plugin_namespace:
[11/14/2022-22:08:08] [TRT] [W] builtin_op_importers.cpp:5123: Attribute epilogueScaleBias not found in plugin node! Ensure that the plugin creator has a default value defined or the engine may fail to build.
[11/14/2022-22:08:08] [TRT] [W] builtin_op_importers.cpp:5123: Attribute epilogueScaleBiasRelu not found in plugin node! Ensure that the plugin creator has a default value defined or the engine may fail to build.
[11/14/2022-22:08:08] [TRT] [W] builtin_op_importers.cpp:5123: Attribute epilogueScaleBiasGelu not found in plugin node! Ensure that the plugin creator has a default value defined or the engine may fail to build.
[11/14/2022-22:08:08] [TRT] [W] builtin_op_importers.cpp:5123: Attribute epilogueScaleBiasBeta not found in plugin node! Ensure that the plugin creator has a default value defined or the engine may fail to build.
[11/14/2022-22:08:08] [TRT] [I] Successfully created plugin: SmallTileGEMM_TRT
[11/14/2022-22:08:08] [TRT] [I] No importer registered for op: SmallTileGEMM_TRT. Attempting to import as plugin.
[11/14/2022-22:08:08] [TRT] [I] Searching for plugin: SmallTileGEMM_TRT, plugin_version: 1, plugin_namespace:
[11/14/2022-22:08:08] [TRT] [W] builtin_op_importers.cpp:5123: Attribute epilogueScaleBias not found in plugin node! Ensure that the plugin creator has a default value defined or the engine may fail to build.
[11/14/2022-22:08:08] [TRT] [W] builtin_op_importers.cpp:5123: Attribute epilogueScaleBiasRelu not found in plugin node! Ensure that the plugin creator has a default value defined or the engine may fail to build.
[11/14/2022-22:08:08] [TRT] [W] builtin_op_importers.cpp:5123: Attribute epilogueScaleBiasGelu not found in plugin node! Ensure that the plugin creator has a default value defined or the engine may fail to build.
[11/14/2022-22:08:08] [TRT] [W] builtin_op_importers.cpp:5123: Attribute epilogueScaleBiasBeta not found in plugin node! Ensure that the plugin creator has a default value defined or the engine may fail to build.
[11/14/2022-22:08:08] [TRT] [I] Successfully created plugin: SmallTileGEMM_TRT
[11/14/2022-22:08:08] [TRT] [I] No importer registered for op: SmallTileGEMM_TRT. Attempting to import as plugin.
[11/14/2022-22:08:08] [TRT] [I] Searching for plugin: SmallTileGEMM_TRT, plugin_version: 1, plugin_namespace:
[11/14/2022-22:08:08] [TRT] [W] builtin_op_importers.cpp:5123: Attribute epilogueScaleBias not found in plugin node! Ensure that the plugin creator has a default value defined or the engine may fail to build.
[11/14/2022-22:08:08] [TRT] [W] builtin_op_importers.cpp:5123: Attribute epilogueScaleBiasRelu not found in plugin node! Ensure that the plugin creator has a default value defined or the engine may fail to build.
[11/14/2022-22:08:08] [TRT] [W] builtin_op_importers.cpp:5123: Attribute epilogueScaleBiasGelu not found in plugin node! Ensure that the plugin creator has a default value defined or the engine may fail to build.
[11/14/2022-22:08:08] [TRT] [W] builtin_op_importers.cpp:5123: Attribute epilogueScaleBiasBeta not found in plugin node! Ensure that the plugin creator has a default value defined or the engine may fail to build.
[11/14/2022-22:08:08] [TRT] [I] Successfully created plugin: SmallTileGEMM_TRT
[11/14/2022-22:08:08] [TRT] [I] No importer registered for op: SmallTileGEMM_TRT. Attempting to import as plugin.
[11/14/2022-22:08:08] [TRT] [I] Searching for plugin: SmallTileGEMM_TRT, plugin_version: 1, plugin_namespace:
[11/14/2022-22:08:08] [TRT] [W] builtin_op_importers.cpp:5123: Attribute epilogueScaleBias not found in plugin node! Ensure that the plugin creator has a default value defined or the engine may fail to build.
[11/14/2022-22:08:08] [TRT] [W] builtin_op_importers.cpp:5123: Attribute epilogueScaleBiasRelu not found in plugin node! Ensure that the plugin creator has a default value defined or the engine may fail to build.
[11/14/2022-22:08:08] [TRT] [W] builtin_op_importers.cpp:5123: Attribute epilogueScaleBiasGelu not found in plugin node! Ensure that the plugin creator has a default value defined or the engine may fail to build.
[11/14/2022-22:08:08] [TRT] [W] builtin_op_importers.cpp:5123: Attribute epilogueScaleBiasBeta not found in plugin node! Ensure that the plugin creator has a default value defined or the engine may fail to build.
[11/14/2022-22:08:08] [TRT] [I] Successfully created plugin: SmallTileGEMM_TRT
[2022-11-14 22:08:08,924 ResNet50.py:94 INFO] Unmarking output: topk_layer_output_value
[11/14/2022-22:08:08] [TRT] [W] DynamicRange(min: -128, max: 127). Dynamic range should be symmetric for better accuracy.
[2022-11-14 22:08:08,924 builder.py:177 INFO] Building ./build/engines/Orin/resnet50/Offline/resnet50-Offline-gpu-b256-int8.triton_k_99_MaxP.plan
[11/14/2022-22:08:08] [TRT] [W] DLA requests all profiles have same min, max, and opt value. All dla layers are falling back to GPU
[11/14/2022-22:08:08] [TRT] [I] Reading Calibration Cache for calibrator: EntropyCalibration2
[11/14/2022-22:08:08] [TRT] [I] Generated calibration scales using calibration cache. Make sure that calibration cache has latest scales.
[11/14/2022-22:08:08] [TRT] [I] To regenerate calibration cache, please delete the existing one. TensorRT will generate a new calibration cache.
[11/14/2022-22:08:08] [TRT] [I] ---------- Layers Running on DLA ----------
[11/14/2022-22:08:08] [TRT] [I] ---------- Layers Running on GPU ----------
[11/14/2022-22:08:08] [TRT] [I] [GpuLayer] CONV_ACT_POOL: conv1 + scale_conv1 + conv1_relu + pool1
[11/14/2022-22:08:08] [TRT] [I] [GpuLayer] PLUGIN_V2: RES2_FULL_FUSION
[11/14/2022-22:08:08] [TRT] [I] [GpuLayer] CONVOLUTION: res3a_branch2a + res3a_branch2a_relu
[11/14/2022-22:08:08] [TRT] [I] [GpuLayer] CONVOLUTION: res3a_branch1
[11/14/2022-22:08:08] [TRT] [I] [GpuLayer] CONVOLUTION: res3a_branch2b + res3a_branch2b_relu
[11/14/2022-22:08:08] [TRT] [I] [GpuLayer] PLUGIN_V2: SmallTileGEMM_TRT_res3a_branch2c_conv_residual_relu
[11/14/2022-22:08:08] [TRT] [I] [GpuLayer] CONVOLUTION: res3b_branch2a + res3b_branch2a_relu
[11/14/2022-22:08:08] [TRT] [I] [GpuLayer] CONVOLUTION: res3b_branch2b + res3b_branch2b_relu
[11/14/2022-22:08:08] [TRT] [I] [GpuLayer] PLUGIN_V2: SmallTileGEMM_TRT_res3b_branch2c_conv_residual_relu
[11/14/2022-22:08:08] [TRT] [I] [GpuLayer] CONVOLUTION: res3c_branch2a + res3c_branch2a_relu
[11/14/2022-22:08:08] [TRT] [I] [GpuLayer] CONVOLUTION: res3c_branch2b + res3c_branch2b_relu
[11/14/2022-22:08:08] [TRT] [I] [GpuLayer] PLUGIN_V2: SmallTileGEMM_TRT_res3c_branch2c_conv_residual_relu
[11/14/2022-22:08:08] [TRT] [I] [GpuLayer] CONVOLUTION: res3d_branch2a + res3d_branch2a_relu
[11/14/2022-22:08:08] [TRT] [I] [GpuLayer] CONVOLUTION: res3d_branch2b + res3d_branch2b_relu
[11/14/2022-22:08:08] [TRT] [I] [GpuLayer] PLUGIN_V2: SmallTileGEMM_TRT_res3d_branch2c_conv_residual_relu
[11/14/2022-22:08:08] [TRT] [I] [GpuLayer] CONVOLUTION: res4a_branch2a + res4a_branch2a_relu
[11/14/2022-22:08:08] [TRT] [I] [GpuLayer] CONVOLUTION: res4a_branch1
[11/14/2022-22:08:08] [TRT] [I] [GpuLayer] CONVOLUTION: res4a_branch2b + res4a_branch2b_relu
[11/14/2022-22:08:08] [TRT] [I] [GpuLayer] PLUGIN_V2: SmallTileGEMM_TRT_res4a_branch2c_conv_residual_relu
[11/14/2022-22:08:08] [TRT] [I] [GpuLayer] CONVOLUTION: res4b_branch2a + res4b_branch2a_relu
[11/14/2022-22:08:08] [TRT] [I] [GpuLayer] CONVOLUTION: res4b_branch2b + res4b_branch2b_relu
[11/14/2022-22:08:08] [TRT] [I] [GpuLayer] PLUGIN_V2: SmallTileGEMM_TRT_res4b_branch2c_conv_residual_relu
[11/14/2022-22:08:08] [TRT] [I] [GpuLayer] CONVOLUTION: res4c_branch2a + res4c_branch2a_relu
[11/14/2022-22:08:08] [TRT] [I] [GpuLayer] CONVOLUTION: res4c_branch2b + res4c_branch2b_relu
[11/14/2022-22:08:08] [TRT] [I] [GpuLayer] PLUGIN_V2: SmallTileGEMM_TRT_res4c_branch2c_conv_residual_relu
[11/14/2022-22:08:08] [TRT] [I] [GpuLayer] CONVOLUTION: res4d_branch2a + res4d_branch2a_relu
[11/14/2022-22:08:08] [TRT] [I] [GpuLayer] CONVOLUTION: res4d_branch2b + res4d_branch2b_relu
[11/14/2022-22:08:08] [TRT] [I] [GpuLayer] PLUGIN_V2: SmallTileGEMM_TRT_res4d_branch2c_conv_residual_relu
[11/14/2022-22:08:08] [TRT] [I] [GpuLayer] CONVOLUTION: res4e_branch2a + res4e_branch2a_relu
[11/14/2022-22:08:08] [TRT] [I] [GpuLayer] CONVOLUTION: res4e_branch2b + res4e_branch2b_relu
[11/14/2022-22:08:08] [TRT] [I] [GpuLayer] PLUGIN_V2: SmallTileGEMM_TRT_res4e_branch2c_conv_residual_relu
[11/14/2022-22:08:08] [TRT] [I] [GpuLayer] CONVOLUTION: res4f_branch2a + res4f_branch2a_relu
[11/14/2022-22:08:08] [TRT] [I] [GpuLayer] CONVOLUTION: res4f_branch2b + res4f_branch2b_relu
[11/14/2022-22:08:08] [TRT] [I] [GpuLayer] PLUGIN_V2: SmallTileGEMM_TRT_res4f_branch2c_conv_residual_relu
[11/14/2022-22:08:08] [TRT] [I] [GpuLayer] CONVOLUTION: res5a_branch1
[11/14/2022-22:08:08] [TRT] [I] [GpuLayer] CONVOLUTION: res5a_branch2a + res5a_branch2a_relu
[11/14/2022-22:08:08] [TRT] [I] [GpuLayer] CONVOLUTION: res5a_branch2b + res5a_branch2b_relu
[11/14/2022-22:08:08] [TRT] [I] [GpuLayer] PLUGIN_V2: SmallTileGEMM_TRT_res5a_branch2c_conv_residual_relu
[11/14/2022-22:08:08] [TRT] [I] [GpuLayer] CONVOLUTION: res5b_branch2a + res5b_branch2a_relu
[11/14/2022-22:08:08] [TRT] [I] [GpuLayer] CONVOLUTION: res5b_branch2b + res5b_branch2b_relu
[11/14/2022-22:08:08] [TRT] [I] [GpuLayer] PLUGIN_V2: SmallTileGEMM_TRT_res5b_branch2c_conv_residual_relu
[11/14/2022-22:08:08] [TRT] [I] [GpuLayer] CONVOLUTION: res5c_branch2a + res5c_branch2a_relu
[11/14/2022-22:08:08] [TRT] [I] [GpuLayer] CONVOLUTION: res5c_branch2b + res5c_branch2b_relu
[11/14/2022-22:08:08] [TRT] [I] [GpuLayer] PLUGIN_V2: SmallTileGEMM_TRT_res5c_branch2c_conv_residual_relu
[11/14/2022-22:08:08] [TRT] [I] [GpuLayer] POOLING: squeeze_replaced
[11/14/2022-22:08:08] [TRT] [I] [GpuLayer] CONVOLUTION: fc_replaced
[11/14/2022-22:08:08] [TRT] [I] [GpuLayer] TOPK: topk_layer
[11/14/2022-22:08:08] [TRT] [I] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +0, now: CPU 1857, GPU 14702 (MiB)
[11/14/2022-22:08:08] [TRT] [I] [MemUsageChange] Init cuDNN: CPU +0, GPU +0, now: CPU 1857, GPU 14702 (MiB)
[11/14/2022-22:08:08] [TRT] [I] Local timing cache in use. Profiling results in this builder pass will not be stored.
[11/14/2022-22:08:48] [TRT] [I] Some tactics do not have sufficient workspace memory to run. Increasing workspace size will enable more tactics, please check verbose output for requested sizes.
[11/14/2022-22:10:50] [TRT] [I] Detected 1 inputs and 1 output network tensors.
[11/14/2022-22:10:50] [TRT] [I] Total Host Persistent Memory: 78496
[11/14/2022-22:10:50] [TRT] [I] Total Device Persistent Memory: 22528
[11/14/2022-22:10:50] [TRT] [I] Total Scratch Memory: 811008
[11/14/2022-22:10:50] [TRT] [I] [MemUsageStats] Peak memory usage of TRT CPU/GPU memory allocators: CPU 26 MiB, GPU 1964 MiB
[11/14/2022-22:10:50] [TRT] [I] [BlockAssignment] Algorithm ShiftNTopDown took 1.45947ms to assign 4 blocks to 51 nodes requiring 417464320 bytes.
[11/14/2022-22:10:50] [TRT] [I] Total Activation Memory: 417464320
[11/14/2022-22:10:52] [TRT] [I] Detected 1 inputs and 1 output network tensors.
[11/14/2022-22:10:52] [TRT] [I] Total Host Persistent Memory: 78496
[11/14/2022-22:10:52] [TRT] [I] Total Device Persistent Memory: 22528
[11/14/2022-22:10:52] [TRT] [I] Total Scratch Memory: 811008
[11/14/2022-22:10:52] [TRT] [I] [MemUsageStats] Peak memory usage of TRT CPU/GPU memory allocators: CPU 44 MiB, GPU 1964 MiB
[11/14/2022-22:10:52] [TRT] [I] [BlockAssignment] Algorithm ShiftNTopDown took 1.52743ms to assign 4 blocks to 51 nodes requiring 417464320 bytes.
[11/14/2022-22:10:52] [TRT] [I] Total Activation Memory: 417464320
[11/14/2022-22:10:52] [TRT] [I] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +0, now: CPU 2162, GPU 14926 (MiB)
[11/14/2022-22:10:52] [TRT] [I] [MemUsageChange] Init cuDNN: CPU +1, GPU +0, now: CPU 2163, GPU 14926 (MiB)
[11/14/2022-22:10:52] [TRT] [I] [MemUsageChange] TensorRT-managed allocation in building engine: CPU +39, GPU +32, now: CPU 39, GPU 32 (MiB)
[11/14/2022-22:10:52] [TRT] [I] The profiling verbosity was set to ProfilingVerbosity::kLAYER_NAMES_ONLY when the engine was built, so only the layer names will be returned. Rebuild the engine with ProfilingVerbosity::kDETAILED to get more verbose layer information.
[2022-11-14 22:10:52,840 builder.py:210 INFO] ========= TensorRT Engine Layer Information =========
[2022-11-14 22:10:52,840 builder.py:211 INFO] Layers:
Reformatting CopyNode for Input Tensor 0 to conv1 + scale_conv1 + conv1_relu + pool1
conv1 + scale_conv1 + conv1_relu + pool1
RES2_FULL_FUSION
res3a_branch2a + res3a_branch2a_relu
res3a_branch1
res3a_branch2b + res3a_branch2b_relu
SmallTileGEMM_TRT_res3a_branch2c_conv_residual_relu
res3b_branch2a + res3b_branch2a_relu
res3b_branch2b + res3b_branch2b_relu
SmallTileGEMM_TRT_res3b_branch2c_conv_residual_relu
res3c_branch2a + res3c_branch2a_relu
res3c_branch2b + res3c_branch2b_relu
SmallTileGEMM_TRT_res3c_branch2c_conv_residual_relu
res3d_branch2a + res3d_branch2a_relu
res3d_branch2b + res3d_branch2b_relu
SmallTileGEMM_TRT_res3d_branch2c_conv_residual_relu
res4a_branch2a + res4a_branch2a_relu
res4a_branch1
res4a_branch2b + res4a_branch2b_relu
SmallTileGEMM_TRT_res4a_branch2c_conv_residual_relu
res4b_branch2a + res4b_branch2a_relu
res4b_branch2b + res4b_branch2b_relu
SmallTileGEMM_TRT_res4b_branch2c_conv_residual_relu
res4c_branch2a + res4c_branch2a_relu
res4c_branch2b + res4c_branch2b_relu
SmallTileGEMM_TRT_res4c_branch2c_conv_residual_relu
res4d_branch2a + res4d_branch2a_relu
res4d_branch2b + res4d_branch2b_relu
SmallTileGEMM_TRT_res4d_branch2c_conv_residual_relu
res4e_branch2a + res4e_branch2a_relu
res4e_branch2b + res4e_branch2b_relu
SmallTileGEMM_TRT_res4e_branch2c_conv_residual_relu
res4f_branch2a + res4f_branch2a_relu
res4f_branch2b + res4f_branch2b_relu
SmallTileGEMM_TRT_res4f_branch2c_conv_residual_relu
res5a_branch1
res5a_branch2a + res5a_branch2a_relu
res5a_branch2b + res5a_branch2b_relu
SmallTileGEMM_TRT_res5a_branch2c_conv_residual_relu
res5b_branch2a + res5b_branch2a_relu
res5b_branch2b + res5b_branch2b_relu
SmallTileGEMM_TRT_res5b_branch2c_conv_residual_relu
res5c_branch2a + res5c_branch2a_relu
res5c_branch2b + res5c_branch2b_relu
SmallTileGEMM_TRT_res5c_branch2c_conv_residual_relu
squeeze_replaced
fc_replaced
Reformatting CopyNode for Input Tensor 0 to topk_layer
topk_layer
Reformatting CopyNode for Input Tensor 0 to conv1 + scale_conv1 + conv1_relu + pool1 [profile 1]
conv1 + scale_conv1 + conv1_relu + pool1 [profile 1]
RES2_FULL_FUSION [profile 1]
res3a_branch2a + res3a_branch2a_relu [profile 1]
res3a_branch1 [profile 1]
res3a_branch2b + res3a_branch2b_relu [profile 1]
SmallTileGEMM_TRT_res3a_branch2c_conv_residual_relu [profile 1]
res3b_branch2a + res3b_branch2a_relu [profile 1]
res3b_branch2b + res3b_branch2b_relu [profile 1]
SmallTileGEMM_TRT_res3b_branch2c_conv_residual_relu [profile 1]
res3c_branch2a + res3c_branch2a_relu [profile 1]
res3c_branch2b + res3c_branch2b_relu [profile 1]
SmallTileGEMM_TRT_res3c_branch2c_conv_residual_relu [profile 1]
res3d_branch2a + res3d_branch2a_relu [profile 1]
res3d_branch2b + res3d_branch2b_relu [profile 1]
SmallTileGEMM_TRT_res3d_branch2c_conv_residual_relu [profile 1]
res4a_branch2a + res4a_branch2a_relu [profile 1]
res4a_branch1 [profile 1]
res4a_branch2b + res4a_branch2b_relu [profile 1]
SmallTileGEMM_TRT_res4a_branch2c_conv_residual_relu [profile 1]
res4b_branch2a + res4b_branch2a_relu [profile 1]
res4b_branch2b + res4b_branch2b_relu [profile 1]
SmallTileGEMM_TRT_res4b_branch2c_conv_residual_relu [profile 1]
res4c_branch2a + res4c_branch2a_relu [profile 1]
res4c_branch2b + res4c_branch2b_relu [profile 1]
SmallTileGEMM_TRT_res4c_branch2c_conv_residual_relu [profile 1]
res4d_branch2a + res4d_branch2a_relu [profile 1]
res4d_branch2b + res4d_branch2b_relu [profile 1]
SmallTileGEMM_TRT_res4d_branch2c_conv_residual_relu [profile 1]
res4e_branch2a + res4e_branch2a_relu [profile 1]
res4e_branch2b + res4e_branch2b_relu [profile 1]
SmallTileGEMM_TRT_res4e_branch2c_conv_residual_relu [profile 1]
res4f_branch2a + res4f_branch2a_relu [profile 1]
res4f_branch2b + res4f_branch2b_relu [profile 1]
SmallTileGEMM_TRT_res4f_branch2c_conv_residual_relu [profile 1]
res5a_branch1 [profile 1]
res5a_branch2a + res5a_branch2a_relu [profile 1]
res5a_branch2b + res5a_branch2b_relu [profile 1]
SmallTileGEMM_TRT_res5a_branch2c_conv_residual_relu [profile 1]
res5b_branch2a + res5b_branch2a_relu [profile 1]
res5b_branch2b + res5b_branch2b_relu [profile 1]
SmallTileGEMM_TRT_res5b_branch2c_conv_residual_relu [profile 1]
res5c_branch2a + res5c_branch2a_relu [profile 1]
res5c_branch2b + res5c_branch2b_relu [profile 1]
SmallTileGEMM_TRT_res5c_branch2c_conv_residual_relu [profile 1]
squeeze_replaced [profile 1]
fc_replaced [profile 1]
Reformatting CopyNode for Input Tensor 0 to topk_layer [profile 1]
topk_layer [profile 1]
Bindings:
input_tensor_0
topk_layer_output_index
input_tensor_0 [profile 1]
topk_layer_output_index [profile 1]
[11/14/2022-22:10:52] [TRT] [W] The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
[11/14/2022-22:10:52] [TRT] [W] The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
[2022-11-14 22:10:52,871 generate_engines.py:176 INFO] Finished building engines for resnet50 benchmark in Offline scenario.
Time taken to generate engines: 176.79140067100525 seconds
make[1]: Leaving directory '/home/zack/inference_results_v2.1/closed/NVIDIA'
make[1]: Entering directory '/home/zack/inference_results_v2.1/closed/NVIDIA'
[2022-11-14 22:10:55,859 main_v2.py:221 INFO] Detected system ID: KnownSystem.Orin
Traceback (most recent call last):
File "/usr/lib/python3.8/runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/usr/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/home/zack/inference_results_v2.1/closed/NVIDIA/code/main_v2.py", line 223, in <module>
main(main_args, DETECTED_SYSTEM)
File "/home/zack/inference_results_v2.1/closed/NVIDIA/code/main_v2.py", line 147, in main
dispatch_action(main_args, config_dict, workload_setting)
File "/home/zack/inference_results_v2.1/closed/NVIDIA/code/main_v2.py", line 194, in dispatch_action
handler.run()
File "/home/zack/inference_results_v2.1/closed/NVIDIA/code/actionhandler/base.py", line 75, in run
self.setup()
File "/home/zack/inference_results_v2.1/closed/NVIDIA/code/actionhandler/run_harness.py", line 83, in setup
self.run()
File "/home/zack/inference_results_v2.1/closed/NVIDIA/code/actionhandler/base.py", line 75, in run
self.setup()
File "/home/zack/inference_results_v2.1/closed/NVIDIA/code/actionhandler/generate_conf_files.py", line 66, in setup
self.harness, self.benchmark_conf = get_harness(self.benchmark_conf, None)
File "/home/zack/inference_results_v2.1/closed/NVIDIA/code/__init__.py", line 124, in get_harness
harness = get_cls(G_HARNESS_CLASS_MAP[k])(config, benchmark)
File "/home/zack/inference_results_v2.1/closed/NVIDIA/code/common/server_harness.py", line 145, in __init__
super().__init__(args, benchmark)
File "/home/zack/inference_results_v2.1/closed/NVIDIA/code/common/harness.py", line 88, in __init__
self.check_file_exists(self.executable)
File "/home/zack/inference_results_v2.1/closed/NVIDIA/code/common/harness.py", line 161, in check_file_exists
raise RuntimeError("File {:} does not exist.".format(f))
RuntimeError: File ./build/bin/harness_triton does not exist.
make[1]: *** [Makefile:702: run_harness] Error 1
make[1]: Leaving directory '/home/zack/inference_results_v2.1/closed/NVIDIA'
make: *** [Makefile:689: run] Error 2
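For what it is worth, the final error is just a file-existence check that fires before the harness runs. A minimal sketch of that pattern (hypothetical, reconstructed only from the error string in the traceback, not NVIDIA's actual code):

```python
import os


def check_file_exists(f):
    """Raise if a required build artifact is missing, mirroring the
    error string seen in the traceback above."""
    if not os.path.isfile(f):
        raise RuntimeError("File {:} does not exist.".format(f))
    return True


# The run aborts because the Triton harness binary was never built:
try:
    check_file_exists("./build/bin/harness_triton")
except RuntimeError as e:
    print(e)  # File ./build/bin/harness_triton does not exist.
```

So the harness selection (here the Triton one, via `G_HARNESS_CLASS_MAP`) fails purely because the corresponding binary is missing under `./build/bin/`.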
I think you are hitting the same error I got with JetPack 5.0.2: the missing plugins. I am not certain, but in both of your logs the [W] WARNING lines show that the plugins are not found.
With JetPack 5.0.2 the same error is located at line 4714 in builtin_op_importers.cpp, and apparently with CUDA-X it is located at line 5123 in builtin_op_importers.cpp.
I am not sure whether it is related to CUDA-X or to JetPack, but it looks like the same error to me.
EDIT:
I do not think it is related, but please try the same command without --config_ver=triton
I added it by mistake in the command above 😁
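Before re-running, a quick way to confirm whether the Triton harness binary was ever built (the path is taken from the RuntimeError above; the check itself is just a sketch, not part of the repo):

```shell
# Check whether the Triton harness binary exists before running with
# --config_ver=triton (path taken from the RuntimeError in the log).
HARNESS=./build/bin/harness_triton
if [ -f "$HARNESS" ]; then
  echo "triton harness found"
else
  echo "triton harness missing - drop --config_ver=triton or rebuild the harnesses"
fi
```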
Harry
If you would like, try running:
make run RUN_ARGS="--benchmarks=resnet50 --scenarios=Offline --test_mode=AccuracyOnly"
Then you can compare against my results above to check whether you get the same error log, and whether CUDA-X changed anything concerning the plugins.
This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.