Getting wrong results on DLA

Device: Jetson AGX Orin 64GB
Environment:
L4T 35.4.1
Jetpack 5.1.2
cudla 3.12.1

I use DLA hybrid mode to run inference through cuDLA.
The model has one input with shape 8x3x352x640 and 5 outputs with shapes 8xKx88x160 (K = 3, 8, 5, 9, 10). The input and output format is fp16:dla_linear. I just convert the uint8 image into the fp16 registered GPU input buffers, and I get the outputs by directly converting the fp16 buffers to fp32.

When I run the model in fp16 mode on DLA, I get the right results, with cosine similarity close to 1.0.

When I run the model in int8 mode on DLA (the first two conv layers and the header run in fp16 mode; only the backbone runs in int8 mode), only the result of the first batch is correct, and the rest are totally different.

Actually, the input data is one image (1x3x352x640) repeated 8 times, so the result of each batch should be the same.

Has anyone met this problem and knows how to deal with it?

Thanks!

Hi,

Could you share your source code so we can check if anything is missing in the implementation?
Thanks

Sorry, I can’t share the full source code. Here is part of the conversion and deployment code; could you please check it? If any more info is needed, I’ll try to provide desensitized data.

The DLA model conversion script:

trtexec --onnx=${MODEL}_noqdq.onnx \
        --calib=${MODEL}_precision_config_calib.cache \
        --useDLACore=0 \
        --int8 \
        --fp16 \
        --saveEngine=model_int8_linear_in_linear_out.dla \
        --precisionConstraints=obey \
        --layerPrecisions=$(cat ${MODEL}_precision_config_layer_arg.txt) \
        --buildDLAStandalone \
        --verbose \
        --inputIOFormats=fp16:dla_linear --outputIOFormats=fp16:dla_linear \
        > dla_int8_linear.log 2>&1

The deploy code that copies the uint8 NCHW image into the half-precision DLA-registered input buffers:

void DLARuntime::getDLAInputBuffers() {
    for (int i = 0; i < num_inputs_; ++i) {
        // Convert the uint8 NCHW image directly into the fp16 buffer
        // registered with DLA; dataLen() covers all n*c*h*w elements.
        ConvertUint8ToHalf(static_cast<__half*>(input_bufs_[i]),
                           static_cast<uint8_t*>(input_tensors_[i].tensor->data()),
                           input_tensors_[i].tensor->dataLen(), stream_);
    }
}

Here, input_bufs_ holds the GPU buffers registered for DLA, and input_tensors_ is a data structure similar to DLManagedTensor.
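The ConvertUint8ToHalf implementation itself was never posted in the thread; such a conversion is usually a one-thread-per-element CUDA kernel along these lines (a sketch only, assuming a plain cast with no scaling or normalization):

```cuda
#include <cstdint>
#include <cuda_fp16.h>
#include <cuda_runtime.h>

// Hypothetical elementwise uint8 -> fp16 copy kernel; the poster's actual
// kernel, including any normalization it applies, was not shared.
__global__ void ConvertUint8ToHalfKernel(__half* dst, const uint8_t* src,
                                         size_t n) {
    size_t i = blockIdx.x * (size_t)blockDim.x + threadIdx.x;
    if (i < n) {
        dst[i] = __float2half(static_cast<float>(src[i]));
    }
}

void ConvertUint8ToHalf(__half* dst, const uint8_t* src, size_t n,
                        cudaStream_t stream) {
    const int block = 256;
    const int grid = static_cast<int>((n + block - 1) / block);
    ConvertUint8ToHalfKernel<<<grid, block, 0, stream>>>(dst, src, n);
}
```

ConvertHalfToFloat would be the mirror image, using __half2float per element.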

The deploy code that copies the half-precision DLA-registered output buffers into float buffers:

void DLARuntime::getDLAOutputBuffers() {
    for (int i = 0; i < num_outputs_; ++i) {
        // Convert the fp16 DLA output buffer back to fp32 for post-processing.
        ConvertHalfToFloat(static_cast<float*>(output_tensors_[i].tensor->data()),
                           static_cast<__half*>(output_bufs_[i]),
                           output_tensors_[i].tensor->dataLen(), stream_);
    }
}

The rest of the DLA deploy code is similar to the cuDLA-sample Git repo.

Thanks

Hi,

What about the ConvertUint8ToHalf and ConvertHalfToFloat functions?
Could you share your implementation?

The blog below shows some details about our INT8 representation:

Thanks.

Thanks for your help!

I think I have solved this problem. The previous DLA model computed the concat ops in int8 mode. After I changed all concat ops to fp16 mode, the results of the int8 DLA model are fully consistent between batches.
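For anyone hitting the same issue: the change amounts to adding the Concat layers to the precision file passed through --layerPrecisions (the layer names below are hypothetical; use the Concat node names from your own ONNX graph):

```
# appended to ${MODEL}_precision_config_layer_arg.txt (illustrative names)
/neck/Concat:fp16,/neck/Concat_1:fp16,/head/Concat:fp16
```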

But I am still confused: even the fp16 DLA model's results are not perfectly identical between batches, and I don't understand why the concat precision would affect the output across batches.

Here is the conversion script:

trtexec --minShapes=img:8x3x352x640 \
        --maxShapes=img:8x3x352x640 \
        --optShapes=img:8x3x352x640 \
        --shapes=img:8x3x352x640 \
        --onnx=${MODEL}_noqdq.onnx \
        --useDLACore=0 \
        --buildDLAStandalone \
        --saveEngine=${WORKDIR}/model.int8.fp16.linear.in.fp16.linear.out.standalone.bin  \
        --inputIOFormats=fp16:dla_linear \
        --outputIOFormats=fp16:dla_linear \
        --int8 --fp16 \
        --verbose \
        --calib=${MODEL}_precision_config_calib.cache \
        --precisionConstraints=obey \
        --layerPrecisions=$(cat ${MODEL}_precision_config_layer_arg.txt) \
        > dla_int8_linear.log 2>&1

Here is the conversion log:

&&&& RUNNING TensorRT.trtexec [TensorRT v8502] # /usr/src/tensorrt/bin/trtexec --minShapes=img:8x3x352x640 --maxShapes=img:8x3x352x640 --optShapes=img:8x3x352x640 --shapes=img:8x3x352x640 --onnx=model_dync_bs1/TL_FULL_XG_qat_simplified_modified_noqdq.onnx --useDLACore=0 --buildDLAStandalone --saveEngine=model_dync_bs1/model.fp16.linearin.fp16linearout.standalone.bin --inputIOFormats=fp16:dla_linear --outputIOFormats=fp16:dla_linear --int8 --fp16 --verbose --calib=model_dync_bs1/TL_FULL_XG_qat_simplified_modified_precision_config_calib.cache --precisionConstraints=obey --layerPrecisions=/normalize_input/Conv:fp16,
[06/13/2024-10:16:05] [I] === Model Options ===
[06/13/2024-10:16:05] [I] Format: ONNX
[06/13/2024-10:16:05] [I] Model: model_dync_bs1/TL_FULL_XG_qat_simplified_modified_noqdq.onnx
[06/13/2024-10:16:05] [I] Output:
[06/13/2024-10:16:05] [I] === Build Options ===
[06/13/2024-10:16:05] [I] Max batch: explicit batch
[06/13/2024-10:16:05] [I] Memory Pools: workspace: default, dlaSRAM: default, dlaLocalDRAM: default, dlaGlobalDRAM: default
[06/13/2024-10:16:05] [I] minTiming: 1
[06/13/2024-10:16:05] [I] avgTiming: 8
[06/13/2024-10:16:05] [I] Precision: FP32+FP16+INT8 (obey precision constraints)
[06/13/2024-10:16:05] [I] LayerPrecisions: /normalize_input/Conv:fp16
[06/13/2024-10:16:05] [I] Calibration: model_dync_bs1/TL_FULL_XG_qat_simplified_modified_precision_config_calib.cache
[06/13/2024-10:16:05] [I] Refit: Disabled
[06/13/2024-10:16:05] [I] Sparsity: Disabled
[06/13/2024-10:16:05] [I] Safe mode: Disabled
[06/13/2024-10:16:05] [I] DirectIO mode: Disabled
[06/13/2024-10:16:05] [I] Restricted mode: Disabled
[06/13/2024-10:16:05] [I] Build only: Enabled
[06/13/2024-10:16:05] [I] Save engine: model_dync_bs1/model.fp16.linearin.fp16linearout.standalone.bin
[06/13/2024-10:16:05] [I] Load engine: 
[06/13/2024-10:16:05] [I] Profiling verbosity: 0
[06/13/2024-10:16:05] [I] Tactic sources: Using default tactic sources
[06/13/2024-10:16:05] [I] timingCacheMode: local
[06/13/2024-10:16:05] [I] timingCacheFile: 
[06/13/2024-10:16:05] [I] Heuristic: Disabled
[06/13/2024-10:16:05] [I] Preview Features: Use default preview flags.
[06/13/2024-10:16:05] [I] Input(s): fp16:+dla_linear
[06/13/2024-10:16:05] [I] Output(s): fp16:+dla_linear
[06/13/2024-10:16:05] [I] Input build shape: img=8x3x352x640+8x3x352x640+8x3x352x640
[06/13/2024-10:16:05] [I] Input calibration shape: img=8x3x352x640+8x3x352x640+8x3x352x640
[06/13/2024-10:16:05] [I] === System Options ===
[06/13/2024-10:16:05] [I] Device: 0
[06/13/2024-10:16:05] [I] DLACore: 0
[06/13/2024-10:16:05] [I] Plugins:
[06/13/2024-10:16:05] [I] === Inference Options ===
[06/13/2024-10:16:05] [I] Batch: Explicit
[06/13/2024-10:16:05] [I] Input inference shape: img=8x3x352x640
[06/13/2024-10:16:05] [I] Iterations: 10
[06/13/2024-10:16:05] [I] Duration: 3s (+ 200ms warm up)
[06/13/2024-10:16:05] [I] Sleep time: 0ms
[06/13/2024-10:16:05] [I] Idle time: 0ms
[06/13/2024-10:16:05] [I] Streams: 1
[06/13/2024-10:16:05] [I] ExposeDMA: Disabled
[06/13/2024-10:16:05] [I] Data transfers: Enabled
[06/13/2024-10:16:05] [I] Spin-wait: Disabled
[06/13/2024-10:16:05] [I] Multithreading: Disabled
[06/13/2024-10:16:05] [I] CUDA Graph: Disabled
[06/13/2024-10:16:05] [I] Separate profiling: Disabled
[06/13/2024-10:16:05] [I] Time Deserialize: Disabled
[06/13/2024-10:16:05] [I] Time Refit: Disabled
[06/13/2024-10:16:05] [I] NVTX verbosity: 0
[06/13/2024-10:16:05] [I] Persistent Cache Ratio: 0
[06/13/2024-10:16:05] [I] Inputs:
[06/13/2024-10:16:05] [I] === Reporting Options ===
[06/13/2024-10:16:05] [I] Verbose: Enabled
[06/13/2024-10:16:05] [I] Averages: 10 inferences
[06/13/2024-10:16:05] [I] Percentiles: 90,95,99
[06/13/2024-10:16:05] [I] Dump refittable layers:Disabled
[06/13/2024-10:16:05] [I] Dump output: Disabled
[06/13/2024-10:16:05] [I] Profile: Disabled
[06/13/2024-10:16:05] [I] Export timing to JSON file: 
[06/13/2024-10:16:05] [I] Export output to JSON file: 
[06/13/2024-10:16:05] [I] Export profile to JSON file: 
[06/13/2024-10:16:05] [I] 
[06/13/2024-10:16:05] [I] === Device Information ===
[06/13/2024-10:16:05] [I] Selected Device: Orin
[06/13/2024-10:16:05] [I] Compute Capability: 8.7
[06/13/2024-10:16:05] [I] SMs: 8
[06/13/2024-10:16:05] [I] Compute Clock Rate: 1.3 GHz
[06/13/2024-10:16:05] [I] Device Global Memory: 30592 MiB
[06/13/2024-10:16:05] [I] Shared Memory per SM: 164 KiB
[06/13/2024-10:16:05] [I] Memory Bus Width: 256 bits (ECC disabled)
[06/13/2024-10:16:05] [I] Memory Clock Rate: 0.612 GHz
[06/13/2024-10:16:05] [I] 
[06/13/2024-10:16:05] [I] TensorRT version: 8.5.2
[06/13/2024-10:16:05] [V] [TRT] Registered plugin creator - ::BatchedNMSDynamic_TRT version 1
[06/13/2024-10:16:05] [V] [TRT] Registered plugin creator - ::BatchedNMS_TRT version 1
[06/13/2024-10:16:05] [V] [TRT] Registered plugin creator - ::BatchTilePlugin_TRT version 1
[06/13/2024-10:16:05] [V] [TRT] Registered plugin creator - ::Clip_TRT version 1
[06/13/2024-10:16:05] [V] [TRT] Registered plugin creator - ::CoordConvAC version 1
[06/13/2024-10:16:05] [V] [TRT] Registered plugin creator - ::CropAndResizeDynamic version 1
[06/13/2024-10:16:05] [V] [TRT] Registered plugin creator - ::CropAndResize version 1
[06/13/2024-10:16:05] [V] [TRT] Registered plugin creator - ::DecodeBbox3DPlugin version 1
[06/13/2024-10:16:05] [V] [TRT] Registered plugin creator - ::DetectionLayer_TRT version 1
[06/13/2024-10:16:05] [V] [TRT] Registered plugin creator - ::EfficientNMS_Explicit_TF_TRT version 1
[06/13/2024-10:16:05] [V] [TRT] Registered plugin creator - ::EfficientNMS_Implicit_TF_TRT version 1
[06/13/2024-10:16:05] [V] [TRT] Registered plugin creator - ::EfficientNMS_ONNX_TRT version 1
[06/13/2024-10:16:05] [V] [TRT] Registered plugin creator - ::EfficientNMS_TRT version 1
[06/13/2024-10:16:05] [V] [TRT] Registered plugin creator - ::FlattenConcat_TRT version 1
[06/13/2024-10:16:05] [V] [TRT] Registered plugin creator - ::GenerateDetection_TRT version 1
[06/13/2024-10:16:05] [V] [TRT] Registered plugin creator - ::GridAnchor_TRT version 1
[06/13/2024-10:16:05] [V] [TRT] Registered plugin creator - ::GridAnchorRect_TRT version 1
[06/13/2024-10:16:05] [V] [TRT] Registered plugin creator - ::GroupNorm version 1
[06/13/2024-10:16:05] [V] [TRT] Registered plugin creator - ::InstanceNormalization_TRT version 1
[06/13/2024-10:16:05] [V] [TRT] Registered plugin creator - ::InstanceNormalization_TRT version 2
[06/13/2024-10:16:05] [V] [TRT] Registered plugin creator - ::LayerNorm version 1
[06/13/2024-10:16:05] [V] [TRT] Registered plugin creator - ::LReLU_TRT version 1
[06/13/2024-10:16:05] [V] [TRT] Registered plugin creator - ::MultilevelCropAndResize_TRT version 1
[06/13/2024-10:16:05] [V] [TRT] Registered plugin creator - ::MultilevelProposeROI_TRT version 1
[06/13/2024-10:16:05] [V] [TRT] Registered plugin creator - ::MultiscaleDeformableAttnPlugin_TRT version 1
[06/13/2024-10:16:05] [V] [TRT] Registered plugin creator - ::NMSDynamic_TRT version 1
[06/13/2024-10:16:05] [V] [TRT] Registered plugin creator - ::NMS_TRT version 1
[06/13/2024-10:16:05] [V] [TRT] Registered plugin creator - ::Normalize_TRT version 1
[06/13/2024-10:16:05] [V] [TRT] Registered plugin creator - ::PillarScatterPlugin version 1
[06/13/2024-10:16:05] [V] [TRT] Registered plugin creator - ::PriorBox_TRT version 1
[06/13/2024-10:16:05] [V] [TRT] Registered plugin creator - ::ProposalDynamic version 1
[06/13/2024-10:16:05] [V] [TRT] Registered plugin creator - ::ProposalLayer_TRT version 1
[06/13/2024-10:16:05] [V] [TRT] Registered plugin creator - ::Proposal version 1
[06/13/2024-10:16:05] [V] [TRT] Registered plugin creator - ::PyramidROIAlign_TRT version 1
[06/13/2024-10:16:05] [V] [TRT] Registered plugin creator - ::Region_TRT version 1
[06/13/2024-10:16:05] [V] [TRT] Registered plugin creator - ::Reorg_TRT version 1
[06/13/2024-10:16:05] [V] [TRT] Registered plugin creator - ::ResizeNearest_TRT version 1
[06/13/2024-10:16:05] [V] [TRT] Registered plugin creator - ::ROIAlign_TRT version 1
[06/13/2024-10:16:05] [V] [TRT] Registered plugin creator - ::RPROI_TRT version 1
[06/13/2024-10:16:05] [V] [TRT] Registered plugin creator - ::ScatterND version 1
[06/13/2024-10:16:05] [V] [TRT] Registered plugin creator - ::SeqLen2Spatial version 1
[06/13/2024-10:16:05] [V] [TRT] Registered plugin creator - ::SpecialSlice_TRT version 1
[06/13/2024-10:16:05] [V] [TRT] Registered plugin creator - ::SplitGeLU version 1
[06/13/2024-10:16:05] [V] [TRT] Registered plugin creator - ::Split version 1
[06/13/2024-10:16:05] [V] [TRT] Registered plugin creator - ::VoxelGeneratorPlugin version 1
[06/13/2024-10:16:06] [I] [TRT] [MemUsageChange] Init CUDA: CPU +220, GPU +0, now: CPU 249, GPU 7620 (MiB)
[06/13/2024-10:16:06] [V] [TRT] Trying to load shared library libnvinfer_builder_resource.so.8.5.2
[06/13/2024-10:16:06] [V] [TRT] Loaded shared library libnvinfer_builder_resource.so.8.5.2
[06/13/2024-10:16:09] [I] [TRT] [MemUsageChange] Init builder kernel library: CPU +302, GPU +418, now: CPU 574, GPU 8058 (MiB)
[06/13/2024-10:16:09] [I] Start parsing network model
[06/13/2024-10:16:09] [I] [TRT] ----------------------------------------------------------------
[06/13/2024-10:16:09] [I] [TRT] Input filename:   model_dync_bs1/TL_FULL_XG_qat_simplified_modified_noqdq.onnx
[06/13/2024-10:16:09] [I] [TRT] ONNX IR version:  0.0.9
[06/13/2024-10:16:09] [I] [TRT] Opset version:    13
[06/13/2024-10:16:09] [I] [TRT] Producer name:    
[06/13/2024-10:16:09] [I] [TRT] Producer version: 
[06/13/2024-10:16:09] [I] [TRT] Domain:           
[06/13/2024-10:16:09] [I] [TRT] Model version:    0
[06/13/2024-10:16:09] [I] [TRT] Doc string:       
[06/13/2024-10:16:09] [I] [TRT] ----------------------------------------------------------------
[06/13/2024-10:16:09] [V] [TRT] Adding network input: img with dtype: float32, dimensions: (-1, 3, 352, 640)
[06/13/2024-10:16:09] [V] [TRT] Registering tensor: img for ONNX tensor: img
[06/13/2024-10:16:09] [V] [TRT] Importing initializer: normalize_input.weight
[06/13/2024-10:16:09] [V] [TRT] Importing initializer: normalize_input.bias
[06/13/2024-10:16:09] [V] [TRT] Importing initializer: onnx::QuantizeLinear_1775
[06/13/2024-10:16:09] [V] [TRT] Importing initializer: _v_792
[06/13/2024-10:16:09] [V] [TRT] Importing initializer: onnx::QuantizeLinear_1786
[06/13/2024-10:16:09] [V] [TRT] Importing initializer: _v_795
[06/13/2024-10:16:09] [V] [TRT] Importing initializer: onnx::QuantizeLinear_1797
[06/13/2024-10:16:09] [V] [TRT] Importing initializer: _v_798
[06/13/2024-10:16:09] [V] [TRT] Importing initializer: onnx::QuantizeLinear_1808
[06/13/2024-10:16:09] [V] [TRT] Importing initializer: _v_801
[06/13/2024-10:16:09] [V] [TRT] Importing initializer: onnx::QuantizeLinear_1830
[06/13/2024-10:16:09] [V] [TRT] Importing initializer: _v_807
[06/13/2024-10:16:09] [V] [TRT] Importing initializer: onnx::QuantizeLinear_1819
[06/13/2024-10:16:09] [V] [TRT] Importing initializer: _v_804
[06/13/2024-10:16:09] [V] [TRT] Importing initializer: onnx::QuantizeLinear_1842
[06/13/2024-10:16:09] [V] [TRT] Importing initializer: _v_810
[06/13/2024-10:16:09] [V] [TRT] Importing initializer: onnx::QuantizeLinear_1853
[06/13/2024-10:16:09] [V] [TRT] Importing initializer: _v_813
[06/13/2024-10:16:09] [V] [TRT] Importing initializer: onnx::QuantizeLinear_1866
[06/13/2024-10:16:09] [V] [TRT] Importing initializer: _v_816
[06/13/2024-10:16:09] [V] [TRT] Importing initializer: onnx::QuantizeLinear_1888
[06/13/2024-10:16:09] [V] [TRT] Importing initializer: _v_822
[06/13/2024-10:16:09] [V] [TRT] Importing initializer: onnx::QuantizeLinear_1910
[06/13/2024-10:16:09] [V] [TRT] Importing initializer: _v_828
[06/13/2024-10:16:09] [V] [TRT] Importing initializer: onnx::QuantizeLinear_1899
[06/13/2024-10:16:09] [V] [TRT] Importing initializer: _v_825
[06/13/2024-10:16:09] [V] [TRT] Importing initializer: onnx::QuantizeLinear_1922
[06/13/2024-10:16:09] [V] [TRT] Importing initializer: _v_831
[06/13/2024-10:16:09] [V] [TRT] Importing initializer: onnx::QuantizeLinear_1933
[06/13/2024-10:16:09] [V] [TRT] Importing initializer: _v_834
[06/13/2024-10:16:09] [V] [TRT] Importing initializer: onnx::QuantizeLinear_1946
[06/13/2024-10:16:09] [V] [TRT] Importing initializer: _v_837
[06/13/2024-10:16:09] [V] [TRT] Parsing node: /normalize_input/Conv [Conv]
[06/13/2024-10:16:09] [V] [TRT] Searching for input: img
[06/13/2024-10:16:09] [V] [TRT] Searching for input: normalize_input.weight
[06/13/2024-10:16:09] [V] [TRT] Searching for input: normalize_input.bias
[06/13/2024-10:16:09] [V] [TRT] /normalize_input/Conv [Conv] inputs: [img -> (-1, 3, 352, 640)[FLOAT]], [normalize_input.weight -> (3, 1, 1, 1)[FLOAT]], [normalize_input.bias -> (3)[FLOAT]], 
[06/13/2024-10:16:09] [V] [TRT] Convolution input dimensions: (-1, 3, 352, 640)
[06/13/2024-10:16:09] [V] [TRT] Registering layer: /normalize_input/Conv for ONNX node: /normalize_input/Conv
[06/13/2024-10:16:09] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 3
[06/13/2024-10:16:09] [V] [TRT] Convolution output dimensions: (-1, 3, 352, 640)
[06/13/2024-10:16:09] [V] [TRT] Registering tensor: /normalize_input/Conv_output_0 for ONNX tensor: /normalize_input/Conv_output_0
[06/13/2024-10:16:09] [V] [TRT] /normalize_input/Conv [Conv] outputs: [/normalize_input/Conv_output_0 -> (-1, 3, 352, 640)[FLOAT]], 
[06/13/2024-10:16:09] [V] [TRT] Parsing node: /backbone_base_base_layer_0/Conv [Conv]
[06/13/2024-10:16:09] [V] [TRT] Searching for input: /normalize_input/Conv_output_0
[06/13/2024-10:16:09] [V] [TRT] Searching for input: onnx::QuantizeLinear_1775
[06/13/2024-10:16:09] [V] [TRT] Searching for input: _v_792
[06/13/2024-10:16:09] [V] [TRT] /backbone_base_base_layer_0/Conv [Conv] inputs: [/normalize_input/Conv_output_0 -> (-1, 3, 352, 640)[FLOAT]], [onnx::QuantizeLinear_1775 -> (16, 3, 3, 3)[FLOAT]], [_v_792 -> (16)[FLOAT]], 
[06/13/2024-10:16:09] [V] [TRT] Convolution input dimensions: (-1, 3, 352, 640)
[06/13/2024-10:16:09] [V] [TRT] Registering layer: /backbone_base_base_layer_0/Conv for ONNX node: /backbone_base_base_layer_0/Conv
[06/13/2024-10:16:09] [V] [TRT] Using kernel: (3, 3), strides: (2, 2), prepadding: (1, 1), postpadding: (1, 1), dilations: (1, 1), numOutputs: 16
[06/13/2024-10:16:09] [V] [TRT] Convolution output dimensions: (-1, 16, 176, 320)
[06/13/2024-10:16:09] [V] [TRT] Registering tensor: /backbone_base_base_layer_0/Conv_output_0 for ONNX tensor: /backbone_base_base_layer_0/Conv_output_0
[06/13/2024-10:16:09] [V] [TRT] /backbone_base_base_layer_0/Conv [Conv] outputs: [/backbone_base_base_layer_0/Conv_output_0 -> (-1, 16, 176, 320)[FLOAT]], 
[06/13/2024-10:16:09] [V] [TRT] Parsing node: /backbone_base_base_layer_0/Relu [Relu]
[06/13/2024-10:16:09] [V] [TRT] Searching for input: /backbone_base_base_layer_0/Conv_output_0
[06/13/2024-10:16:09] [V] [TRT] /backbone_base_base_layer_0/Relu [Relu] inputs: [/backbone_base_base_layer_0/Conv_output_0 -> (-1, 16, 176, 320)[FLOAT]], 
[06/13/2024-10:16:09] [V] [TRT] Registering layer: /backbone_base_base_layer_0/Relu for ONNX node: /backbone_base_base_layer_0/Relu
[06/13/2024-10:16:09] [V] [TRT] Registering tensor: /backbone_base_base_layer_0/Relu_output_0 for ONNX tensor: /backbone_base_base_layer_0/Relu_output_0
[06/13/2024-10:16:09] [V] [TRT] /backbone_base_base_layer_0/Relu [Relu] outputs: [/backbone_base_base_layer_0/Relu_output_0 -> (-1, 16, 176, 320)[FLOAT]], 
[06/13/2024-10:16:09] [V] [TRT] Parsing node: /backbone_base_level0_0/Conv [Conv]
[06/13/2024-10:16:09] [V] [TRT] Searching for input: /backbone_base_base_layer_0/Relu_output_0
[06/13/2024-10:16:09] [V] [TRT] Searching for input: onnx::QuantizeLinear_1786
[06/13/2024-10:16:09] [V] [TRT] Searching for input: _v_795
[06/13/2024-10:16:09] [V] [TRT] /backbone_base_level0_0/Conv [Conv] inputs: [/backbone_base_base_layer_0/Relu_output_0 -> (-1, 16, 176, 320)[FLOAT]], [onnx::QuantizeLinear_1786 -> (16, 16, 3, 3)[FLOAT]], [_v_795 -> (16)[FLOAT]], 
[06/13/2024-10:16:09] [V] [TRT] Convolution input dimensions: (-1, 16, 176, 320)
[06/13/2024-10:16:09] [V] [TRT] Registering layer: /backbone_base_level0_0/Conv for ONNX node: /backbone_base_level0_0/Conv
[06/13/2024-10:16:09] [V] [TRT] Using kernel: (3, 3), strides: (1, 1), prepadding: (1, 1), postpadding: (1, 1), dilations: (1, 1), numOutputs: 16
[06/13/2024-10:16:09] [V] [TRT] Convolution output dimensions: (-1, 16, 176, 320)
[06/13/2024-10:16:09] [V] [TRT] Registering tensor: /backbone_base_level0_0/Conv_output_0 for ONNX tensor: /backbone_base_level0_0/Conv_output_0
[06/13/2024-10:16:09] [V] [TRT] /backbone_base_level0_0/Conv [Conv] outputs: [/backbone_base_level0_0/Conv_output_0 -> (-1, 16, 176, 320)[FLOAT]], 
[06/13/2024-10:16:09] [V] [TRT] Parsing node: /backbone_base_level0_0/Relu [Relu]
[06/13/2024-10:16:09] [V] [TRT] Searching for input: /backbone_base_level0_0/Conv_output_0
[06/13/2024-10:16:09] [V] [TRT] /backbone_base_level0_0/Relu [Relu] inputs: [/backbone_base_level0_0/Conv_output_0 -> (-1, 16, 176, 320)[FLOAT]], 
[06/13/2024-10:16:09] [V] [TRT] Registering layer: /backbone_base_level0_0/Relu for ONNX node: /backbone_base_level0_0/Relu
[06/13/2024-10:16:09] [V] [TRT] Registering tensor: /backbone_base_level0_0/Relu_output_0 for ONNX tensor: /backbone_base_level0_0/Relu_output_0
[06/13/2024-10:16:09] [V] [TRT] /backbone_base_level0_0/Relu [Relu] outputs: [/backbone_base_level0_0/Relu_output_0 -> (-1, 16, 176, 320)[FLOAT]], 
[06/13/2024-10:16:09] [V] [TRT] Parsing node: /backbone_base_level1_0/Conv [Conv]
[06/13/2024-10:16:09] [V] [TRT] Searching for input: /backbone_base_level0_0/Relu_output_0
[06/13/2024-10:16:09] [V] [TRT] Searching for input: onnx::QuantizeLinear_1797
[06/13/2024-10:16:09] [V] [TRT] Searching for input: _v_798
[06/13/2024-10:16:09] [V] [TRT] /backbone_base_level1_0/Conv [Conv] inputs: [/backbone_base_level0_0/Relu_output_0 -> (-1, 16, 176, 320)[FLOAT]], [onnx::QuantizeLinear_1797 -> (32, 16, 3, 3)[FLOAT]], [_v_798 -> (32)[FLOAT]], 
[06/13/2024-10:16:09] [V] [TRT] Convolution input dimensions: (-1, 16, 176, 320)
[06/13/2024-10:16:09] [V] [TRT] Registering layer: /backbone_base_level1_0/Conv for ONNX node: /backbone_base_level1_0/Conv
[06/13/2024-10:16:09] [V] [TRT] Using kernel: (3, 3), strides: (1, 1), prepadding: (1, 1), postpadding: (1, 1), dilations: (1, 1), numOutputs: 32
[06/13/2024-10:16:09] [V] [TRT] Convolution output dimensions: (-1, 32, 176, 320)
[06/13/2024-10:16:09] [V] [TRT] Registering tensor: /backbone_base_level1_0/Conv_output_0 for ONNX tensor: /backbone_base_level1_0/Conv_output_0
[06/13/2024-10:16:09] [V] [TRT] /backbone_base_level1_0/Conv [Conv] outputs: [/backbone_base_level1_0/Conv_output_0 -> (-1, 32, 176, 320)[FLOAT]], 
[06/13/2024-10:16:09] [V] [TRT] Parsing node: /backbone_base_level1_0/Relu [Relu]
[06/13/2024-10:16:09] [V] [TRT] Searching for input: /backbone_base_level1_0/Conv_output_0
[06/13/2024-10:16:09] [V] [TRT] /backbone_base_level1_0/Relu [Relu] inputs: [/backbone_base_level1_0/Conv_output_0 -> (-1, 32, 176, 320)[FLOAT]], 
[06/13/2024-10:16:09] [V] [TRT] Registering layer: /backbone_base_level1_0/Relu for ONNX node: /backbone_base_level1_0/Relu
[06/13/2024-10:16:09] [V] [TRT] Registering tensor: /backbone_base_level1_0/Relu_output_0 for ONNX tensor: /backbone_base_level1_0/Relu_output_0
[06/13/2024-10:16:09] [V] [TRT] /backbone_base_level1_0/Relu [Relu] outputs: [/backbone_base_level1_0/Relu_output_0 -> (-1, 32, 176, 320)[FLOAT]], 
[06/13/2024-10:16:09] [V] [TRT] Parsing node: /backbone_base_level2_tree1_conv1/Conv [Conv]
[06/13/2024-10:16:09] [V] [TRT] Searching for input: /backbone_base_level1_0/Relu_output_0
[06/13/2024-10:16:09] [V] [TRT] Searching for input: onnx::QuantizeLinear_1808
[06/13/2024-10:16:09] [V] [TRT] Searching for input: _v_801
[06/13/2024-10:16:09] [V] [TRT] /backbone_base_level2_tree1_conv1/Conv [Conv] inputs: [/backbone_base_level1_0/Relu_output_0 -> (-1, 32, 176, 320)[FLOAT]], [onnx::QuantizeLinear_1808 -> (48, 32, 3, 3)[FLOAT]], [_v_801 -> (48)[FLOAT]], 
[06/13/2024-10:16:09] [V] [TRT] Convolution input dimensions: (-1, 32, 176, 320)
[06/13/2024-10:16:09] [V] [TRT] Registering layer: /backbone_base_level2_tree1_conv1/Conv for ONNX node: /backbone_base_level2_tree1_conv1/Conv
[06/13/2024-10:16:09] [V] [TRT] Using kernel: (3, 3), strides: (2, 2), prepadding: (1, 1), postpadding: (1, 1), dilations: (1, 1), numOutputs: 48
[06/13/2024-10:16:09] [V] [TRT] Convolution output dimensions: (-1, 48, 88, 160)
[06/13/2024-10:16:09] [V] [TRT] Registering tensor: /backbone_base_level2_tree1_conv1/Conv_output_0 for ONNX tensor: /backbone_base_level2_tree1_conv1/Conv_output_0
[06/13/2024-10:16:09] [V] [TRT] /backbone_base_level2_tree1_conv1/Conv [Conv] outputs: [/backbone_base_level2_tree1_conv1/Conv_output_0 -> (-1, 48, 88, 160)[FLOAT]], 
[06/13/2024-10:16:09] [V] [TRT] Parsing node: /backbone_base_level2_downsample/MaxPool [MaxPool]
[06/13/2024-10:16:09] [V] [TRT] Searching for input: /backbone_base_level1_0/Relu_output_0
[06/13/2024-10:16:09] [V] [TRT] /backbone_base_level2_downsample/MaxPool [MaxPool] inputs: [/backbone_base_level1_0/Relu_output_0 -> (-1, 32, 176, 320)[FLOAT]], 
[06/13/2024-10:16:09] [V] [TRT] Registering layer: /backbone_base_level2_downsample/MaxPool for ONNX node: /backbone_base_level2_downsample/MaxPool
[06/13/2024-10:16:09] [V] [TRT] Registering tensor: /backbone_base_level2_downsample/MaxPool_output_0 for ONNX tensor: /backbone_base_level2_downsample/MaxPool_output_0
[06/13/2024-10:16:09] [V] [TRT] /backbone_base_level2_downsample/MaxPool [MaxPool] outputs: [/backbone_base_level2_downsample/MaxPool_output_0 -> (-1, 32, 88, 160)[FLOAT]], 
[06/13/2024-10:16:09] [V] [TRT] Parsing node: /backbone_base_level2_tree1_conv1/Relu [Relu]
[06/13/2024-10:16:09] [V] [TRT] Searching for input: /backbone_base_level2_tree1_conv1/Conv_output_0
[06/13/2024-10:16:09] [V] [TRT] /backbone_base_level2_tree1_conv1/Relu [Relu] inputs: [/backbone_base_level2_tree1_conv1/Conv_output_0 -> (-1, 48, 88, 160)[FLOAT]], 
[06/13/2024-10:16:09] [V] [TRT] Registering layer: /backbone_base_level2_tree1_conv1/Relu for ONNX node: /backbone_base_level2_tree1_conv1/Relu
[06/13/2024-10:16:09] [V] [TRT] Registering tensor: /backbone_base_level2_tree1_conv1/Relu_output_0 for ONNX tensor: /backbone_base_level2_tree1_conv1/Relu_output_0
[06/13/2024-10:16:09] [V] [TRT] /backbone_base_level2_tree1_conv1/Relu [Relu] outputs: [/backbone_base_level2_tree1_conv1/Relu_output_0 -> (-1, 48, 88, 160)[FLOAT]], 
[06/13/2024-10:16:09] [V] [TRT] Parsing node: /backbone_base_level2_project_0/Conv [Conv]
[06/13/2024-10:16:09] [V] [TRT] Searching for input: /backbone_base_level2_downsample/MaxPool_output_0
[06/13/2024-10:16:09] [V] [TRT] Searching for input: onnx::QuantizeLinear_1830
[06/13/2024-10:16:09] [V] [TRT] Searching for input: _v_807
[06/13/2024-10:16:09] [V] [TRT] /backbone_base_level2_project_0/Conv [Conv] inputs: [/backbone_base_level2_downsample/MaxPool_output_0 -> (-1, 32, 88, 160)[FLOAT]], [onnx::QuantizeLinear_1830 -> (48, 32, 1, 1)[FLOAT]], [_v_807 -> (48)[FLOAT]], 
[06/13/2024-10:16:09] [V] [TRT] Convolution input dimensions: (-1, 32, 88, 160)
[06/13/2024-10:16:09] [V] [TRT] Registering layer: /backbone_base_level2_project_0/Conv for ONNX node: /backbone_base_level2_project_0/Conv
[06/13/2024-10:16:09] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 48
[06/13/2024-10:16:09] [V] [TRT] Convolution output dimensions: (-1, 48, 88, 160)
[06/13/2024-10:16:09] [V] [TRT] Registering tensor: /backbone_base_level2_project_0/Conv_output_0 for ONNX tensor: /backbone_base_level2_project_0/Conv_output_0
[06/13/2024-10:16:09] [V] [TRT] /backbone_base_level2_project_0/Conv [Conv] outputs: [/backbone_base_level2_project_0/Conv_output_0 -> (-1, 48, 88, 160)[FLOAT]], 
[06/13/2024-10:16:09] [V] [TRT] Parsing node: /backbone_base_level2_tree1_conv2/Conv [Conv]
[06/13/2024-10:16:09] [V] [TRT] Searching for input: /backbone_base_level2_tree1_conv1/Relu_output_0
[06/13/2024-10:16:09] [V] [TRT] Searching for input: onnx::QuantizeLinear_1819
[06/13/2024-10:16:09] [V] [TRT] Searching for input: _v_804
[06/13/2024-10:16:09] [V] [TRT] /backbone_base_level2_tree1_conv2/Conv [Conv] inputs: [/backbone_base_level2_tree1_conv1/Relu_output_0 -> (-1, 48, 88, 160)[FLOAT]], [onnx::QuantizeLinear_1819 -> (48, 48, 3, 3)[FLOAT]], [_v_804 -> (48)[FLOAT]], 
[06/13/2024-10:16:09] [V] [TRT] Convolution input dimensions: (-1, 48, 88, 160)
[06/13/2024-10:16:09] [V] [TRT] Registering layer: /backbone_base_level2_tree1_conv2/Conv for ONNX node: /backbone_base_level2_tree1_conv2/Conv
[06/13/2024-10:16:09] [V] [TRT] Using kernel: (3, 3), strides: (1, 1), prepadding: (1, 1), postpadding: (1, 1), dilations: (1, 1), numOutputs: 48
[06/13/2024-10:16:09] [V] [TRT] Convolution output dimensions: (-1, 48, 88, 160)
[06/13/2024-10:16:09] [V] [TRT] Registering tensor: /backbone_base_level2_tree1_conv2/Conv_output_0 for ONNX tensor: /backbone_base_level2_tree1_conv2/Conv_output_0
[06/13/2024-10:16:09] [V] [TRT] /backbone_base_level2_tree1_conv2/Conv [Conv] outputs: [/backbone_base_level2_tree1_conv2/Conv_output_0 -> (-1, 48, 88, 160)[FLOAT]], 
[06/13/2024-10:16:09] [V] [TRT] Parsing node: /backbone_base_level2_tree1_Add/Add [Add]
[06/13/2024-10:16:09] [V] [TRT] Searching for input: /backbone_base_level2_tree1_conv2/Conv_output_0
[06/13/2024-10:16:09] [V] [TRT] Searching for input: /backbone_base_level2_project_0/Conv_output_0
[06/13/2024-10:16:09] [V] [TRT] /backbone_base_level2_tree1_Add/Add [Add] inputs: [/backbone_base_level2_tree1_conv2/Conv_output_0 -> (-1, 48, 88, 160)[FLOAT]], [/backbone_base_level2_project_0/Conv_output_0 -> (-1, 48, 88, 160)[FLOAT]], 
[06/13/2024-10:16:09] [V] [TRT] Registering layer: /backbone_base_level2_tree1_Add/Add for ONNX node: /backbone_base_level2_tree1_Add/Add
[06/13/2024-10:16:09] [V] [TRT] Registering tensor: /backbone_base_level2_tree1_Add/Add_output_0 for ONNX tensor: /backbone_base_level2_tree1_Add/Add_output_0
[06/13/2024-10:16:09] [V] [TRT] /backbone_base_level2_tree1_Add/Add [Add] outputs: [/backbone_base_level2_tree1_Add/Add_output_0 -> (-1, 48, 88, 160)[FLOAT]], 
[06/13/2024-10:16:09] [V] [TRT] Parsing node: /backbone_base_level2_tree1_Add/relu/Relu [Relu]
[06/13/2024-10:16:09] [V] [TRT] Searching for input: /backbone_base_level2_tree1_Add/Add_output_0
[06/13/2024-10:16:09] [V] [TRT] /backbone_base_level2_tree1_Add/relu/Relu [Relu] inputs: [/backbone_base_level2_tree1_Add/Add_output_0 -> (-1, 48, 88, 160)[FLOAT]], 
[06/13/2024-10:16:09] [V] [TRT] Registering layer: /backbone_base_level2_tree1_Add/relu/Relu for ONNX node: /backbone_base_level2_tree1_Add/relu/Relu
[06/13/2024-10:16:09] [V] [TRT] Registering tensor: /backbone_base_level2_tree1_Add/relu/Relu_output_0 for ONNX tensor: /backbone_base_level2_tree1_Add/relu/Relu_output_0
[06/13/2024-10:16:09] [V] [TRT] /backbone_base_level2_tree1_Add/relu/Relu [Relu] outputs: [/backbone_base_level2_tree1_Add/relu/Relu_output_0 -> (-1, 48, 88, 160)[FLOAT]], 
[06/13/2024-10:16:09] [V] [TRT] Parsing node: /backbone_base_level2_tree2_conv1/Conv [Conv]
[06/13/2024-10:16:09] [V] [TRT] Searching for input: /backbone_base_level2_tree1_Add/relu/Relu_output_0
[06/13/2024-10:16:09] [V] [TRT] Searching for input: onnx::QuantizeLinear_1842
[06/13/2024-10:16:09] [V] [TRT] Searching for input: _v_810
[06/13/2024-10:16:09] [V] [TRT] /backbone_base_level2_tree2_conv1/Conv [Conv] inputs: [/backbone_base_level2_tree1_Add/relu/Relu_output_0 -> (-1, 48, 88, 160)[FLOAT]], [onnx::QuantizeLinear_1842 -> (48, 48, 3, 3)[FLOAT]], [_v_810 -> (48)[FLOAT]], 
[06/13/2024-10:16:09] [V] [TRT] Convolution input dimensions: (-1, 48, 88, 160)
[06/13/2024-10:16:09] [V] [TRT] Registering layer: /backbone_base_level2_tree2_conv1/Conv for ONNX node: /backbone_base_level2_tree2_conv1/Conv
[06/13/2024-10:16:09] [V] [TRT] Using kernel: (3, 3), strides: (1, 1), prepadding: (1, 1), postpadding: (1, 1), dilations: (1, 1), numOutputs: 48
[06/13/2024-10:16:09] [V] [TRT] Convolution output dimensions: (-1, 48, 88, 160)
[06/13/2024-10:16:09] [V] [TRT] Registering tensor: /backbone_base_level2_tree2_conv1/Conv_output_0 for ONNX tensor: /backbone_base_level2_tree2_conv1/Conv_output_0
[06/13/2024-10:16:09] [V] [TRT] /backbone_base_level2_tree2_conv1/Conv [Conv] outputs: [/backbone_base_level2_tree2_conv1/Conv_output_0 -> (-1, 48, 88, 160)[FLOAT]], 
[06/13/2024-10:16:09] [V] [TRT] Parsing node: /backbone_base_level2_tree2_conv1/Relu [Relu]
[06/13/2024-10:16:09] [V] [TRT] Searching for input: /backbone_base_level2_tree2_conv1/Conv_output_0
[06/13/2024-10:16:09] [V] [TRT] /backbone_base_level2_tree2_conv1/Relu [Relu] inputs: [/backbone_base_level2_tree2_conv1/Conv_output_0 -> (-1, 48, 88, 160)[FLOAT]], 
[06/13/2024-10:16:09] [V] [TRT] Registering layer: /backbone_base_level2_tree2_conv1/Relu for ONNX node: /backbone_base_level2_tree2_conv1/Relu
[06/13/2024-10:16:09] [V] [TRT] Registering tensor: /backbone_base_level2_tree2_conv1/Relu_output_0 for ONNX tensor: /backbone_base_level2_tree2_conv1/Relu_output_0
[06/13/2024-10:16:09] [V] [TRT] /backbone_base_level2_tree2_conv1/Relu [Relu] outputs: [/backbone_base_level2_tree2_conv1/Relu_output_0 -> (-1, 48, 88, 160)[FLOAT]], 
[06/13/2024-10:16:09] [V] [TRT] Parsing node: /backbone_base_level2_tree2_conv2/Conv [Conv]
[06/13/2024-10:16:09] [V] [TRT] Searching for input: /backbone_base_level2_tree2_conv1/Relu_output_0
[06/13/2024-10:16:09] [V] [TRT] Searching for input: onnx::QuantizeLinear_1853
[06/13/2024-10:16:09] [V] [TRT] Searching for input: _v_813
[06/13/2024-10:16:09] [V] [TRT] /backbone_base_level2_tree2_conv2/Conv [Conv] inputs: [/backbone_base_level2_tree2_conv1/Relu_output_0 -> (-1, 48, 88, 160)[FLOAT]], [onnx::QuantizeLinear_1853 -> (48, 48, 3, 3)[FLOAT]], [_v_813 -> (48)[FLOAT]], 
[06/13/2024-10:16:09] [V] [TRT] Convolution input dimensions: (-1, 48, 88, 160)
[06/13/2024-10:16:09] [V] [TRT] Registering layer: /backbone_base_level2_tree2_conv2/Conv for ONNX node: /backbone_base_level2_tree2_conv2/Conv
[06/13/2024-10:16:09] [V] [TRT] Using kernel: (3, 3), strides: (1, 1), prepadding: (1, 1), postpadding: (1, 1), dilations: (1, 1), numOutputs: 48
[06/13/2024-10:16:09] [V] [TRT] Convolution output dimensions: (-1, 48, 88, 160)
[06/13/2024-10:16:09] [V] [TRT] Registering tensor: /backbone_base_level2_tree2_conv2/Conv_output_0 for ONNX tensor: /backbone_base_level2_tree2_conv2/Conv_output_0
[06/13/2024-10:16:09] [V] [TRT] /backbone_base_level2_tree2_conv2/Conv [Conv] outputs: [/backbone_base_level2_tree2_conv2/Conv_output_0 -> (-1, 48, 88, 160)[FLOAT]], 
[06/13/2024-10:16:09] [V] [TRT] Parsing node: /backbone_base_level2_tree2_Add/Add [Add]
[06/13/2024-10:16:09] [V] [TRT] Searching for input: /backbone_base_level2_tree2_conv2/Conv_output_0
[06/13/2024-10:16:09] [V] [TRT] Searching for input: /backbone_base_level2_tree1_Add/relu/Relu_output_0
[06/13/2024-10:16:09] [V] [TRT] /backbone_base_level2_tree2_Add/Add [Add] inputs: [/backbone_base_level2_tree2_conv2/Conv_output_0 -> (-1, 48, 88, 160)[FLOAT]], [/backbone_base_level2_tree1_Add/relu/Relu_output_0 -> (-1, 48, 88, 160)[FLOAT]], 
[06/13/2024-10:16:09] [V] [TRT] Registering layer: /backbone_base_level2_tree2_Add/Add for ONNX node: /backbone_base_level2_tree2_Add/Add
[06/13/2024-10:16:09] [V] [TRT] Registering tensor: /backbone_base_level2_tree2_Add/Add_output_0 for ONNX tensor: /backbone_base_level2_tree2_Add/Add_output_0
[06/13/2024-10:16:09] [V] [TRT] /backbone_base_level2_tree2_Add/Add [Add] outputs: [/backbone_base_level2_tree2_Add/Add_output_0 -> (-1, 48, 88, 160)[FLOAT]], 
[06/13/2024-10:16:09] [V] [TRT] Parsing node: /backbone_base_level2_tree2_Add/relu/Relu [Relu]
[06/13/2024-10:16:09] [V] [TRT] Searching for input: /backbone_base_level2_tree2_Add/Add_output_0
[06/13/2024-10:16:09] [V] [TRT] /backbone_base_level2_tree2_Add/relu/Relu [Relu] inputs: [/backbone_base_level2_tree2_Add/Add_output_0 -> (-1, 48, 88, 160)[FLOAT]], 
[06/13/2024-10:16:09] [V] [TRT] Registering layer: /backbone_base_level2_tree2_Add/relu/Relu for ONNX node: /backbone_base_level2_tree2_Add/relu/Relu
[06/13/2024-10:16:09] [V] [TRT] Registering tensor: /backbone_base_level2_tree2_Add/relu/Relu_output_0 for ONNX tensor: /backbone_base_level2_tree2_Add/relu/Relu_output_0
[06/13/2024-10:16:09] [V] [TRT] /backbone_base_level2_tree2_Add/relu/Relu [Relu] outputs: [/backbone_base_level2_tree2_Add/relu/Relu_output_0 -> (-1, 48, 88, 160)[FLOAT]], 
[06/13/2024-10:16:09] [V] [TRT] Parsing node: /Concat [Concat]
[06/13/2024-10:16:09] [V] [TRT] Searching for input: /backbone_base_level2_tree2_Add/relu/Relu_output_0
[06/13/2024-10:16:09] [V] [TRT] Searching for input: /backbone_base_level2_tree1_Add/relu/Relu_output_0
[06/13/2024-10:16:09] [V] [TRT] /Concat [Concat] inputs: [/backbone_base_level2_tree2_Add/relu/Relu_output_0 -> (-1, 48, 88, 160)[FLOAT]], [/backbone_base_level2_tree1_Add/relu/Relu_output_0 -> (-1, 48, 88, 160)[FLOAT]], 
[06/13/2024-10:16:09] [V] [TRT] Registering layer: /Concat for ONNX node: /Concat
[06/13/2024-10:16:09] [V] [TRT] Registering tensor: /Concat_output_0 for ONNX tensor: /Concat_output_0
[06/13/2024-10:16:09] [V] [TRT] /Concat [Concat] outputs: [/Concat_output_0 -> (-1, 96, 88, 160)[FLOAT]], 
[06/13/2024-10:16:09] [V] [TRT] Parsing node: /backbone_base_level2_root_conv/Conv [Conv]
[06/13/2024-10:16:09] [V] [TRT] Searching for input: /Concat_output_0
[06/13/2024-10:16:09] [V] [TRT] Searching for input: onnx::QuantizeLinear_1866
[06/13/2024-10:16:09] [V] [TRT] Searching for input: _v_816
[06/13/2024-10:16:09] [V] [TRT] /backbone_base_level2_root_conv/Conv [Conv] inputs: [/Concat_output_0 -> (-1, 96, 88, 160)[FLOAT]], [onnx::QuantizeLinear_1866 -> (48, 96, 1, 1)[FLOAT]], [_v_816 -> (48)[FLOAT]], 
[06/13/2024-10:16:09] [V] [TRT] Convolution input dimensions: (-1, 96, 88, 160)
[06/13/2024-10:16:09] [V] [TRT] Registering layer: /backbone_base_level2_root_conv/Conv for ONNX node: /backbone_base_level2_root_conv/Conv
[06/13/2024-10:16:09] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 48
[06/13/2024-10:16:09] [V] [TRT] Convolution output dimensions: (-1, 48, 88, 160)
[06/13/2024-10:16:09] [V] [TRT] Registering tensor: /backbone_base_level2_root_conv/Conv_output_0 for ONNX tensor: /backbone_base_level2_root_conv/Conv_output_0
[06/13/2024-10:16:09] [V] [TRT] /backbone_base_level2_root_conv/Conv [Conv] outputs: [/backbone_base_level2_root_conv/Conv_output_0 -> (-1, 48, 88, 160)[FLOAT]], 
[06/13/2024-10:16:09] [V] [TRT] Parsing node: /backbone_base_level2_root_conv/Relu [Relu]
[06/13/2024-10:16:09] [V] [TRT] Searching for input: /backbone_base_level2_root_conv/Conv_output_0
[06/13/2024-10:16:09] [V] [TRT] /backbone_base_level2_root_conv/Relu [Relu] inputs: [/backbone_base_level2_root_conv/Conv_output_0 -> (-1, 48, 88, 160)[FLOAT]], 
[06/13/2024-10:16:09] [V] [TRT] Registering layer: /backbone_base_level2_root_conv/Relu for ONNX node: /backbone_base_level2_root_conv/Relu
[06/13/2024-10:16:09] [V] [TRT] Registering tensor: /backbone_base_level2_root_conv/Relu_output_0 for ONNX tensor: /backbone_base_level2_root_conv/Relu_output_0
[06/13/2024-10:16:09] [V] [TRT] /backbone_base_level2_root_conv/Relu [Relu] outputs: [/backbone_base_level2_root_conv/Relu_output_0 -> (-1, 48, 88, 160)[FLOAT]], 
[06/13/2024-10:16:09] [V] [TRT] Parsing node: /backbone_base_level3_tree1_conv1/Conv [Conv]
[06/13/2024-10:16:09] [V] [TRT] Searching for input: /backbone_base_level2_root_conv/Relu_output_0
[06/13/2024-10:16:09] [V] [TRT] Searching for input: onnx::QuantizeLinear_1888
[06/13/2024-10:16:09] [V] [TRT] Searching for input: _v_822
[06/13/2024-10:16:09] [V] [TRT] /backbone_base_level3_tree1_conv1/Conv [Conv] inputs: [/backbone_base_level2_root_conv/Relu_output_0 -> (-1, 48, 88, 160)[FLOAT]], [onnx::QuantizeLinear_1888 -> (64, 48, 3, 3)[FLOAT]], [_v_822 -> (64)[FLOAT]], 
[06/13/2024-10:16:09] [V] [TRT] Convolution input dimensions: (-1, 48, 88, 160)
[06/13/2024-10:16:09] [V] [TRT] Registering layer: /backbone_base_level3_tree1_conv1/Conv for ONNX node: /backbone_base_level3_tree1_conv1/Conv
[06/13/2024-10:16:09] [V] [TRT] Using kernel: (3, 3), strides: (2, 2), prepadding: (1, 1), postpadding: (1, 1), dilations: (1, 1), numOutputs: 64
[06/13/2024-10:16:09] [V] [TRT] Convolution output dimensions: (-1, 64, 44, 80)
[06/13/2024-10:16:09] [V] [TRT] Registering tensor: /backbone_base_level3_tree1_conv1/Conv_output_0 for ONNX tensor: /backbone_base_level3_tree1_conv1/Conv_output_0
[06/13/2024-10:16:09] [V] [TRT] /backbone_base_level3_tree1_conv1/Conv [Conv] outputs: [/backbone_base_level3_tree1_conv1/Conv_output_0 -> (-1, 64, 44, 80)[FLOAT]], 
[06/13/2024-10:16:09] [V] [TRT] Parsing node: /backbone_base_level3_downsample/MaxPool [MaxPool]
[06/13/2024-10:16:09] [V] [TRT] Searching for input: /backbone_base_level2_root_conv/Relu_output_0
[06/13/2024-10:16:09] [V] [TRT] /backbone_base_level3_downsample/MaxPool [MaxPool] inputs: [/backbone_base_level2_root_conv/Relu_output_0 -> (-1, 48, 88, 160)[FLOAT]], 
[06/13/2024-10:16:09] [V] [TRT] Registering layer: /backbone_base_level3_downsample/MaxPool for ONNX node: /backbone_base_level3_downsample/MaxPool
[06/13/2024-10:16:09] [V] [TRT] Registering tensor: /backbone_base_level3_downsample/MaxPool_output_0 for ONNX tensor: /backbone_base_level3_downsample/MaxPool_output_0
[06/13/2024-10:16:09] [V] [TRT] /backbone_base_level3_downsample/MaxPool [MaxPool] outputs: [/backbone_base_level3_downsample/MaxPool_output_0 -> (-1, 48, 44, 80)[FLOAT]], 
[06/13/2024-10:16:09] [V] [TRT] Parsing node: /backbone_base_level3_tree1_conv1/Relu [Relu]
[06/13/2024-10:16:09] [V] [TRT] Searching for input: /backbone_base_level3_tree1_conv1/Conv_output_0
[06/13/2024-10:16:09] [V] [TRT] /backbone_base_level3_tree1_conv1/Relu [Relu] inputs: [/backbone_base_level3_tree1_conv1/Conv_output_0 -> (-1, 64, 44, 80)[FLOAT]], 
[06/13/2024-10:16:09] [V] [TRT] Registering layer: /backbone_base_level3_tree1_conv1/Relu for ONNX node: /backbone_base_level3_tree1_conv1/Relu
[06/13/2024-10:16:09] [V] [TRT] Registering tensor: /backbone_base_level3_tree1_conv1/Relu_output_0 for ONNX tensor: /backbone_base_level3_tree1_conv1/Relu_output_0
[06/13/2024-10:16:09] [V] [TRT] /backbone_base_level3_tree1_conv1/Relu [Relu] outputs: [/backbone_base_level3_tree1_conv1/Relu_output_0 -> (-1, 64, 44, 80)[FLOAT]], 
[06/13/2024-10:16:09] [V] [TRT] Parsing node: /backbone_base_level3_project_0/Conv [Conv]
[06/13/2024-10:16:09] [V] [TRT] Searching for input: /backbone_base_level3_downsample/MaxPool_output_0
[06/13/2024-10:16:09] [V] [TRT] Searching for input: onnx::QuantizeLinear_1910
[06/13/2024-10:16:09] [V] [TRT] Searching for input: _v_828
[06/13/2024-10:16:09] [V] [TRT] /backbone_base_level3_project_0/Conv [Conv] inputs: [/backbone_base_level3_downsample/MaxPool_output_0 -> (-1, 48, 44, 80)[FLOAT]], [onnx::QuantizeLinear_1910 -> (64, 48, 1, 1)[FLOAT]], [_v_828 -> (64)[FLOAT]], 
[06/13/2024-10:16:09] [V] [TRT] Convolution input dimensions: (-1, 48, 44, 80)
[06/13/2024-10:16:09] [V] [TRT] Registering layer: /backbone_base_level3_project_0/Conv for ONNX node: /backbone_base_level3_project_0/Conv
[06/13/2024-10:16:09] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 64
[06/13/2024-10:16:09] [V] [TRT] Convolution output dimensions: (-1, 64, 44, 80)
[06/13/2024-10:16:09] [V] [TRT] Registering tensor: /backbone_base_level3_project_0/Conv_output_0 for ONNX tensor: /backbone_base_level3_project_0/Conv_output_0
[06/13/2024-10:16:09] [V] [TRT] /backbone_base_level3_project_0/Conv [Conv] outputs: [/backbone_base_level3_project_0/Conv_output_0 -> (-1, 64, 44, 80)[FLOAT]], 
[06/13/2024-10:16:09] [V] [TRT] Parsing node: /backbone_base_level3_tree1_conv2/Conv [Conv]
[06/13/2024-10:16:09] [V] [TRT] Searching for input: /backbone_base_level3_tree1_conv1/Relu_output_0
[06/13/2024-10:16:09] [V] [TRT] Searching for input: onnx::QuantizeLinear_1899
[06/13/2024-10:16:09] [V] [TRT] Searching for input: _v_825
[06/13/2024-10:16:09] [V] [TRT] /backbone_base_level3_tree1_conv2/Conv [Conv] inputs: [/backbone_base_level3_tree1_conv1/Relu_output_0 -> (-1, 64, 44, 80)[FLOAT]], [onnx::QuantizeLinear_1899 -> (64, 64, 3, 3)[FLOAT]], [_v_825 -> (64)[FLOAT]], 
[06/13/2024-10:16:09] [V] [TRT] Convolution input dimensions: (-1, 64, 44, 80)
[06/13/2024-10:16:09] [V] [TRT] Registering layer: /backbone_base_level3_tree1_conv2/Conv for ONNX node: /backbone_base_level3_tree1_conv2/Conv
[06/13/2024-10:16:09] [V] [TRT] Using kernel: (3, 3), strides: (1, 1), prepadding: (1, 1), postpadding: (1, 1), dilations: (1, 1), numOutputs: 64
[06/13/2024-10:16:09] [V] [TRT] Convolution output dimensions: (-1, 64, 44, 80)
[06/13/2024-10:16:09] [V] [TRT] Registering tensor: /backbone_base_level3_tree1_conv2/Conv_output_0 for ONNX tensor: /backbone_base_level3_tree1_conv2/Conv_output_0
[06/13/2024-10:16:09] [V] [TRT] /backbone_base_level3_tree1_conv2/Conv [Conv] outputs: [/backbone_base_level3_tree1_conv2/Conv_output_0 -> (-1, 64, 44, 80)[FLOAT]], 
[06/13/2024-10:16:09] [V] [TRT] Parsing node: /backbone_base_level3_tree1_Add/Add [Add]
[06/13/2024-10:16:09] [V] [TRT] Searching for input: /backbone_base_level3_tree1_conv2/Conv_output_0
[06/13/2024-10:16:09] [V] [TRT] Searching for input: /backbone_base_level3_project_0/Conv_output_0
[06/13/2024-10:16:09] [V] [TRT] /backbone_base_level3_tree1_Add/Add [Add] inputs: [/backbone_base_level3_tree1_conv2/Conv_output_0 -> (-1, 64, 44, 80)[FLOAT]], [/backbone_base_level3_project_0/Conv_output_0 -> (-1, 64, 44, 80)[FLOAT]], 
[06/13/2024-10:16:09] [V] [TRT] Registering layer: /backbone_base_level3_tree1_Add/Add for ONNX node: /backbone_base_level3_tree1_Add/Add
[06/13/2024-10:16:09] [V] [TRT] Registering tensor: /backbone_base_level3_tree1_Add/Add_output_0 for ONNX tensor: /backbone_base_level3_tree1_Add/Add_output_0
[06/13/2024-10:16:09] [V] [TRT] /backbone_base_level3_tree1_Add/Add [Add] outputs: [/backbone_base_level3_tree1_Add/Add_output_0 -> (-1, 64, 44, 80)[FLOAT]], 
[06/13/2024-10:16:09] [V] [TRT] Parsing node: /backbone_base_level3_tree1_Add/relu/Relu [Relu]
[06/13/2024-10:16:09] [V] [TRT] Searching for input: /backbone_base_level3_tree1_Add/Add_output_0
[06/13/2024-10:16:09] [V] [TRT] /backbone_base_level3_tree1_Add/relu/Relu [Relu] inputs: [/backbone_base_level3_tree1_Add/Add_output_0 -> (-1, 64, 44, 80)[FLOAT]], 
[06/13/2024-10:16:09] [V] [TRT] Registering layer: /backbone_base_level3_tree1_Add/relu/Relu for ONNX node: /backbone_base_level3_tree1_Add/relu/Relu
[06/13/2024-10:16:09] [V] [TRT] Registering tensor: /backbone_base_level3_tree1_Add/relu/Relu_output_0 for ONNX tensor: /backbone_base_level3_tree1_Add/relu/Relu_output_0
[06/13/2024-10:16:09] [V] [TRT] /backbone_base_level3_tree1_Add/relu/Relu [Relu] outputs: [/backbone_base_level3_tree1_Add/relu/Relu_output_0 -> (-1, 64, 44, 80)[FLOAT]], 
[06/13/2024-10:16:09] [V] [TRT] Parsing node: /backbone_base_level3_tree2_conv1/Conv [Conv]
[06/13/2024-10:16:09] [V] [TRT] Searching for input: /backbone_base_level3_tree1_Add/relu/Relu_output_0
[06/13/2024-10:16:09] [V] [TRT] Searching for input: onnx::QuantizeLinear_1922
[06/13/2024-10:16:09] [V] [TRT] Searching for input: _v_831
[06/13/2024-10:16:09] [V] [TRT] /backbone_base_level3_tree2_conv1/Conv [Conv] inputs: [/backbone_base_level3_tree1_Add/relu/Relu_output_0 -> (-1, 64, 44, 80)[FLOAT]], [onnx::QuantizeLinear_1922 -> (64, 64, 3, 3)[FLOAT]], [_v_831 -> (64)[FLOAT]], 
[06/13/2024-10:16:09] [V] [TRT] Convolution input dimensions: (-1, 64, 44, 80)
[06/13/2024-10:16:09] [V] [TRT] Registering layer: /backbone_base_level3_tree2_conv1/Conv for ONNX node: /backbone_base_level3_tree2_conv1/Conv
[06/13/2024-10:16:09] [V] [TRT] Using kernel: (3, 3), strides: (1, 1), prepadding: (1, 1), postpadding: (1, 1), dilations: (1, 1), numOutputs: 64
[06/13/2024-10:16:09] [V] [TRT] Convolution output dimensions: (-1, 64, 44, 80)
[06/13/2024-10:16:09] [V] [TRT] Registering tensor: /backbone_base_level3_tree2_conv1/Conv_output_0 for ONNX tensor: /backbone_base_level3_tree2_conv1/Conv_output_0
[06/13/2024-10:16:09] [V] [TRT] /backbone_base_level3_tree2_conv1/Conv [Conv] outputs: [/backbone_base_level3_tree2_conv1/Conv_output_0 -> (-1, 64, 44, 80)[FLOAT]], 
[06/13/2024-10:16:09] [V] [TRT] Parsing node: /backbone_base_level3_tree2_conv1/Relu [Relu]
[06/13/2024-10:16:09] [V] [TRT] Searching for input: /backbone_base_level3_tree2_conv1/Conv_output_0
[06/13/2024-10:16:09] [V] [TRT] /backbone_base_level3_tree2_conv1/Relu [Relu] inputs: [/backbone_base_level3_tree2_conv1/Conv_output_0 -> (-1, 64, 44, 80)[FLOAT]], 
[06/13/2024-10:16:09] [V] [TRT] Registering layer: /backbone_base_level3_tree2_conv1/Relu for ONNX node: /backbone_base_level3_tree2_conv1/Relu
[06/13/2024-10:16:09] [V] [TRT] Registering tensor: /backbone_base_level3_tree2_conv1/Relu_output_0 for ONNX tensor: /backbone_base_level3_tree2_conv1/Relu_output_0
[06/13/2024-10:16:09] [V] [TRT] /backbone_base_level3_tree2_conv1/Relu [Relu] outputs: [/backbone_base_level3_tree2_conv1/Relu_output_0 -> (-1, 64, 44, 80)[FLOAT]], 
[06/13/2024-10:16:09] [V] [TRT] Parsing node: /backbone_base_level3_tree2_conv2/Conv [Conv]
[06/13/2024-10:16:09] [V] [TRT] Searching for input: /backbone_base_level3_tree2_conv1/Relu_output_0
[06/13/2024-10:16:09] [V] [TRT] Searching for input: onnx::QuantizeLinear_1933
[06/13/2024-10:16:09] [V] [TRT] Searching for input: _v_834
[06/13/2024-10:16:09] [V] [TRT] /backbone_base_level3_tree2_conv2/Conv [Conv] inputs: [/backbone_base_level3_tree2_conv1/Relu_output_0 -> (-1, 64, 44, 80)[FLOAT]], [onnx::QuantizeLinear_1933 -> (64, 64, 3, 3)[FLOAT]], [_v_834 -> (64)[FLOAT]], 
[06/13/2024-10:16:09] [V] [TRT] Convolution input dimensions: (-1, 64, 44, 80)
[06/13/2024-10:16:09] [V] [TRT] Registering layer: /backbone_base_level3_tree2_conv2/Conv for ONNX node: /backbone_base_level3_tree2_conv2/Conv
[06/13/2024-10:16:09] [V] [TRT] Using kernel: (3, 3), strides: (1, 1), prepadding: (1, 1), postpadding: (1, 1), dilations: (1, 1), numOutputs: 64
[06/13/2024-10:16:09] [V] [TRT] Convolution output dimensions: (-1, 64, 44, 80)
[06/13/2024-10:16:09] [V] [TRT] Registering tensor: /backbone_base_level3_tree2_conv2/Conv_output_0 for ONNX tensor: /backbone_base_level3_tree2_conv2/Conv_output_0
[06/13/2024-10:16:09] [V] [TRT] /backbone_base_level3_tree2_conv2/Conv [Conv] outputs: [/backbone_base_level3_tree2_conv2/Conv_output_0 -> (-1, 64, 44, 80)[FLOAT]], 
[06/13/2024-10:16:09] [V] [TRT] Parsing node: /backbone_base_level3_tree2_Add/Add [Add]
[06/13/2024-10:16:09] [V] [TRT] Searching for input: /backbone_base_level3_tree2_conv2/Conv_output_0
[06/13/2024-10:16:09] [V] [TRT] Searching for input: /backbone_base_level3_tree1_Add/relu/Relu_output_0
[06/13/2024-10:16:09] [V] [TRT] /backbone_base_level3_tree2_Add/Add [Add] inputs: [/backbone_base_level3_tree2_conv2/Conv_output_0 -> (-1, 64, 44, 80)[FLOAT]], [/backbone_base_level3_tree1_Add/relu/Relu_output_0 -> (-1, 64, 44, 80)[FLOAT]], 
[06/13/2024-10:16:09] [V] [TRT] Registering layer: /backbone_base_level3_tree2_Add/Add for ONNX node: /backbone_base_level3_tree2_Add/Add
[06/13/2024-10:16:09] [V] [TRT] Registering tensor: /backbone_base_level3_tree2_Add/Add_output_0 for ONNX tensor: /backbone_base_level3_tree2_Add/Add_output_0
[06/13/2024-10:16:09] [V] [TRT] /backbone_base_level3_tree2_Add/Add [Add] outputs: [/backbone_base_level3_tree2_Add/Add_output_0 -> (-1, 64, 44, 80)[FLOAT]], 
[06/13/2024-10:16:09] [V] [TRT] Parsing node: /backbone_base_level3_tree2_Add/relu/Relu [Relu]
[06/13/2024-10:16:09] [V] [TRT] Searching for input: /backbone_base_level3_tree2_Add/Add_output_0
[06/13/2024-10:16:09] [V] [TRT] /backbone_base_level3_tree2_Add/relu/Relu [Relu] inputs: [/backbone_base_level3_tree2_Add/Add_output_0 -> (-1, 64, 44, 80)[FLOAT]], 
[06/13/2024-10:16:09] [V] [TRT] Registering layer: /backbone_base_level3_tree2_Add/relu/Relu for ONNX node: /backbone_base_level3_tree2_Add/relu/Relu
[06/13/2024-10:16:09] [V] [TRT] Registering tensor: /backbone_base_level3_tree2_Add/relu/Relu_output_0 for ONNX tensor: /backbone_base_level3_tree2_Add/relu/Relu_output_0
[06/13/2024-10:16:09] [V] [TRT] /backbone_base_level3_tree2_Add/relu/Relu [Relu] outputs: [/backbone_base_level3_tree2_Add/relu/Relu_output_0 -> (-1, 64, 44, 80)[FLOAT]], 
[06/13/2024-10:16:09] [V] [TRT] Parsing node: /Concat_1 [Concat]
[06/13/2024-10:16:09] [V] [TRT] Searching for input: /backbone_base_level3_tree2_Add/relu/Relu_output_0
[06/13/2024-10:16:09] [V] [TRT] Searching for input: /backbone_base_level3_tree1_Add/relu/Relu_output_0
[06/13/2024-10:16:09] [V] [TRT] Searching for input: /backbone_base_level3_downsample/MaxPool_output_0
[06/13/2024-10:16:09] [V] [TRT] /Concat_1 [Concat] inputs: [/backbone_base_level3_tree2_Add/relu/Relu_output_0 -> (-1, 64, 44, 80)[FLOAT]], [/backbone_base_level3_tree1_Add/relu/Relu_output_0 -> (-1, 64, 44, 80)[FLOAT]], [/backbone_base_level3_downsample/MaxPool_output_0 -> (-1, 48, 44, 80)[FLOAT]], 
[06/13/2024-10:16:09] [V] [TRT] Registering layer: /Concat_1 for ONNX node: /Concat_1
[06/13/2024-10:16:09] [V] [TRT] Registering tensor: /Concat_1_output_0 for ONNX tensor: /Concat_1_output_0
[06/13/2024-10:16:09] [V] [TRT] /Concat_1 [Concat] outputs: [/Concat_1_output_0 -> (-1, 176, 44, 80)[FLOAT]], 
[06/13/2024-10:16:09] [V] [TRT] Parsing node: /backbone_base_level3_root_conv/Conv [Conv]
[06/13/2024-10:16:09] [V] [TRT] Searching for input: /Concat_1_output_0
[06/13/2024-10:16:09] [V] [TRT] Searching for input: onnx::QuantizeLinear_1946
[06/13/2024-10:16:09] [V] [TRT] Searching for input: _v_837
[06/13/2024-10:16:09] [V] [TRT] /backbone_base_level3_root_conv/Conv [Conv] inputs: [/Concat_1_output_0 -> (-1, 176, 44, 80)[FLOAT]], [onnx::QuantizeLinear_1946 -> (64, 176, 1, 1)[FLOAT]], [_v_837 -> (64)[FLOAT]], 
[06/13/2024-10:16:09] [V] [TRT] Convolution input dimensions: (-1, 176, 44, 80)
[06/13/2024-10:16:09] [V] [TRT] Registering layer: /backbone_base_level3_root_conv/Conv for ONNX node: /backbone_base_level3_root_conv/Conv
[06/13/2024-10:16:09] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 64
[06/13/2024-10:16:09] [V] [TRT] Convolution output dimensions: (-1, 64, 44, 80)
[06/13/2024-10:16:09] [V] [TRT] Registering tensor: /backbone_base_level3_root_conv/Conv_output_0_0 for ONNX tensor: /backbone_base_level3_root_conv/Conv_output_0
[06/13/2024-10:16:09] [V] [TRT] /backbone_base_level3_root_conv/Conv [Conv] outputs: [/backbone_base_level3_root_conv/Conv_output_0 -> (-1, 64, 44, 80)[FLOAT]], 
[06/13/2024-10:16:09] [V] [TRT] Marking /backbone_base_level3_root_conv/Conv_output_0_0 as output: /backbone_base_level3_root_conv/Conv_output_0
[06/13/2024-10:16:09] [I] Finish parsing network model
[06/13/2024-10:16:09] [E] Error[3]: [builderConfig.cpp::setFlag::75] Error Code 3: API Usage Error (Parameter check failed at: optimizer/api/builderConfig.cpp::setFlag::75, condition: builderFlag != BuilderFlag::kPREFER_PRECISION_CONSTRAINTS || !flags[BuilderFlag::kOBEY_PRECISION_CONSTRAINTS]. kPREFER_PRECISION_CONSTRAINTS cannot be set if kOBEY_PRECISION_CONSTRAINTS is set.
)
[06/13/2024-10:16:09] [V] [TRT] Original: 36 layers
[06/13/2024-10:16:09] [V] [TRT] After dead-layer removal: 36 layers
[06/13/2024-10:16:09] [V] [TRT] After Myelin optimization: 36 layers
[06/13/2024-10:16:09] [V] [TRT] Running: ActivationToPointwiseConversion on /backbone_base_base_layer_0/Relu
[06/13/2024-10:16:09] [V] [TRT] Swap the layer type of /backbone_base_base_layer_0/Relu from ACTIVATION to POINTWISE
[06/13/2024-10:16:09] [V] [TRT] Running: ActivationToPointwiseConversion on /backbone_base_level0_0/Relu
[06/13/2024-10:16:09] [V] [TRT] Swap the layer type of /backbone_base_level0_0/Relu from ACTIVATION to POINTWISE
[06/13/2024-10:16:09] [V] [TRT] Running: ActivationToPointwiseConversion on /backbone_base_level1_0/Relu
[06/13/2024-10:16:09] [V] [TRT] Swap the layer type of /backbone_base_level1_0/Relu from ACTIVATION to POINTWISE
[06/13/2024-10:16:09] [V] [TRT] Running: ActivationToPointwiseConversion on /backbone_base_level2_tree1_conv1/Relu
[06/13/2024-10:16:09] [V] [TRT] Swap the layer type of /backbone_base_level2_tree1_conv1/Relu from ACTIVATION to POINTWISE
[06/13/2024-10:16:09] [V] [TRT] Running: ActivationToPointwiseConversion on /backbone_base_level2_tree1_Add/relu/Relu
[06/13/2024-10:16:09] [V] [TRT] Swap the layer type of /backbone_base_level2_tree1_Add/relu/Relu from ACTIVATION to POINTWISE
[06/13/2024-10:16:09] [V] [TRT] Running: ActivationToPointwiseConversion on /backbone_base_level2_tree2_conv1/Relu
[06/13/2024-10:16:09] [V] [TRT] Swap the layer type of /backbone_base_level2_tree2_conv1/Relu from ACTIVATION to POINTWISE
[06/13/2024-10:16:09] [V] [TRT] Running: ActivationToPointwiseConversion on /backbone_base_level2_tree2_Add/relu/Relu
[06/13/2024-10:16:09] [V] [TRT] Swap the layer type of /backbone_base_level2_tree2_Add/relu/Relu from ACTIVATION to POINTWISE
[06/13/2024-10:16:09] [V] [TRT] Running: ActivationToPointwiseConversion on /backbone_base_level2_root_conv/Relu
[06/13/2024-10:16:09] [V] [TRT] Swap the layer type of /backbone_base_level2_root_conv/Relu from ACTIVATION to POINTWISE
[06/13/2024-10:16:09] [V] [TRT] Running: ActivationToPointwiseConversion on /backbone_base_level3_tree1_conv1/Relu
[06/13/2024-10:16:09] [V] [TRT] Swap the layer type of /backbone_base_level3_tree1_conv1/Relu from ACTIVATION to POINTWISE
[06/13/2024-10:16:09] [V] [TRT] Running: ActivationToPointwiseConversion on /backbone_base_level3_tree1_Add/relu/Relu
[06/13/2024-10:16:09] [V] [TRT] Swap the layer type of /backbone_base_level3_tree1_Add/relu/Relu from ACTIVATION to POINTWISE
[06/13/2024-10:16:09] [V] [TRT] Running: ActivationToPointwiseConversion on /backbone_base_level3_tree2_conv1/Relu
[06/13/2024-10:16:09] [V] [TRT] Swap the layer type of /backbone_base_level3_tree2_conv1/Relu from ACTIVATION to POINTWISE
[06/13/2024-10:16:09] [V] [TRT] Running: ActivationToPointwiseConversion on /backbone_base_level3_tree2_Add/relu/Relu
[06/13/2024-10:16:09] [V] [TRT] Swap the layer type of /backbone_base_level3_tree2_Add/relu/Relu from ACTIVATION to POINTWISE
[06/13/2024-10:16:09] [V] [TRT] After final dead-layer removal: 36 layers
[06/13/2024-10:16:09] [V] [TRT] After vertical fusions: 36 layers
[06/13/2024-10:16:09] [V] [TRT] After final dead-layer removal: 36 layers
[06/13/2024-10:16:09] [V] [TRT] After slice removal: 36 layers
[06/13/2024-10:16:09] [V] [TRT] Eliminating concatenation /Concat_1
[06/13/2024-10:16:09] [V] [TRT] Generating copy for /backbone_base_level3_tree2_Add/relu/Relu_output_0 to /Concat_1_output_0 because copy elision is disabled for concat.
[06/13/2024-10:16:09] [V] [TRT] Generating copy for /backbone_base_level3_tree1_Add/relu/Relu_output_0 to /Concat_1_output_0 because copy elision is disabled for concat.
[06/13/2024-10:16:09] [V] [TRT] Generating copy for /backbone_base_level3_downsample/MaxPool_output_0 to /Concat_1_output_0 because copy elision is disabled for concat.
[06/13/2024-10:16:09] [V] [TRT] Eliminating concatenation /Concat
[06/13/2024-10:16:09] [V] [TRT] Generating copy for /backbone_base_level2_tree2_Add/relu/Relu_output_0 to /Concat_output_0 because copy elision is disabled for concat.
[06/13/2024-10:16:09] [V] [TRT] Generating copy for /backbone_base_level2_tree1_Add/relu/Relu_output_0 to /Concat_output_0 because copy elision is disabled for concat.
[06/13/2024-10:16:09] [V] [TRT] After concat removal: 39 layers
[06/13/2024-10:16:09] [V] [TRT] After tensor merging: 39 layers
[06/13/2024-10:16:09] [V] [TRT] Trying to split Reshape and strided tensor
[06/13/2024-10:16:09] [I] [TRT] Reading Calibration Cache for calibrator: EntropyCalibration2
[06/13/2024-10:16:09] [I] [TRT] Generated calibration scales using calibration cache. Make sure that calibration cache has latest scales.
[06/13/2024-10:16:09] [I] [TRT] To regenerate calibration cache, please delete the existing one. TensorRT will generate a new calibration cache.
[06/13/2024-10:16:09] [V] [TRT] INT8 Inference Tensor scales and zero-points: /normalize_input/Conv_output_0 scale and zero-point Quantization(scale: {0.0204256,}, zero-point: {0,})
[06/13/2024-10:16:09] [V] [TRT] INT8 Inference Tensor scales and zero-points: /backbone_base_base_layer_0/Conv_output_0 scale and zero-point Quantization(scale: {0.0278693,}, zero-point: {0,})
[06/13/2024-10:16:09] [V] [TRT] INT8 Inference Tensor scales and zero-points: /backbone_base_base_layer_0/Relu_output_0 scale and zero-point Quantization(scale: {0.0278693,}, zero-point: {0,})
[06/13/2024-10:16:09] [V] [TRT] INT8 Inference Tensor scales and zero-points: /backbone_base_level0_0/Conv_output_0 scale and zero-point Quantization(scale: {0.086582,}, zero-point: {0,})
[06/13/2024-10:16:09] [V] [TRT] INT8 Inference Tensor scales and zero-points: /backbone_base_level0_0/Relu_output_0 scale and zero-point Quantization(scale: {0.086582,}, zero-point: {0,})
[06/13/2024-10:16:09] [V] [TRT] INT8 Inference Tensor scales and zero-points: /backbone_base_level1_0/Conv_output_0 scale and zero-point Quantization(scale: {0.0698093,}, zero-point: {0,})
[06/13/2024-10:16:09] [V] [TRT] INT8 Inference Tensor scales and zero-points: /backbone_base_level1_0/Relu_output_0 scale and zero-point Quantization(scale: {0.0698093,}, zero-point: {0,})
[06/13/2024-10:16:09] [V] [TRT] INT8 Inference Tensor scales and zero-points: /backbone_base_level2_tree1_conv1/Conv_output_0 scale and zero-point Quantization(scale: {0.0791153,}, zero-point: {0,})
[06/13/2024-10:16:09] [V] [TRT] INT8 Inference Tensor scales and zero-points: /backbone_base_level2_downsample/MaxPool_output_0 scale and zero-point Quantization(scale: {0.0698093,}, zero-point: {0,})
[06/13/2024-10:16:09] [V] [TRT] INT8 Inference Tensor scales and zero-points: /backbone_base_level2_tree1_conv1/Relu_output_0 scale and zero-point Quantization(scale: {0.0791153,}, zero-point: {0,})
[06/13/2024-10:16:09] [V] [TRT] INT8 Inference Tensor scales and zero-points: /backbone_base_level2_project_0/Conv_output_0 scale and zero-point Quantization(scale: {0.0799413,}, zero-point: {0,})
[06/13/2024-10:16:09] [V] [TRT] INT8 Inference Tensor scales and zero-points: /backbone_base_level2_tree1_conv2/Conv_output_0 scale and zero-point Quantization(scale: {0.0935553,}, zero-point: {0,})
[06/13/2024-10:16:09] [V] [TRT] INT8 Inference Tensor scales and zero-points: /backbone_base_level2_tree1_Add/Add_output_0 scale and zero-point Quantization(scale: {0.104235,}, zero-point: {0,})
[06/13/2024-10:16:09] [V] [TRT] INT8 Inference Tensor scales and zero-points: /backbone_base_level2_tree1_Add/relu/Relu_output_0 scale and zero-point Quantization(scale: {0.104235,}, zero-point: {0,})
[06/13/2024-10:16:09] [V] [TRT] INT8 Inference Tensor scales and zero-points: /backbone_base_level2_tree2_conv1/Conv_output_0 scale and zero-point Quantization(scale: {0.0735086,}, zero-point: {0,})
[06/13/2024-10:16:09] [V] [TRT] INT8 Inference Tensor scales and zero-points: /backbone_base_level2_tree2_conv1/Relu_output_0 scale and zero-point Quantization(scale: {0.0735086,}, zero-point: {0,})
[06/13/2024-10:16:09] [V] [TRT] INT8 Inference Tensor scales and zero-points: /backbone_base_level2_tree2_conv2/Conv_output_0 scale and zero-point Quantization(scale: {0.122808,}, zero-point: {0,})
[06/13/2024-10:16:09] [V] [TRT] INT8 Inference Tensor scales and zero-points: /backbone_base_level2_tree2_Add/Add_output_0 scale and zero-point Quantization(scale: {0.137775,}, zero-point: {0,})
[06/13/2024-10:16:09] [V] [TRT] INT8 Inference Tensor scales and zero-points: /backbone_base_level2_tree2_Add/relu/Relu_output_0 scale and zero-point Quantization(scale: {0.137775,}, zero-point: {0,})
[06/13/2024-10:16:09] [V] [TRT] INT8 Inference Tensor scales and zero-points: /Concat_output_0 scale and zero-point Quantization(scale: {0.127774,}, zero-point: {0,})
[06/13/2024-10:16:09] [V] [TRT] INT8 Inference Tensor scales and zero-points: /backbone_base_level2_root_conv/Conv_output_0 scale and zero-point Quantization(scale: {0.0765355,}, zero-point: {0,})
[06/13/2024-10:16:09] [V] [TRT] INT8 Inference Tensor scales and zero-points: /backbone_base_level2_root_conv/Relu_output_0 scale and zero-point Quantization(scale: {0.0765355,}, zero-point: {0,})
[06/13/2024-10:16:09] [V] [TRT] INT8 Inference Tensor scales and zero-points: /backbone_base_level3_tree1_conv1/Conv_output_0 scale and zero-point Quantization(scale: {0.0784342,}, zero-point: {0,})
[06/13/2024-10:16:09] [V] [TRT] INT8 Inference Tensor scales and zero-points: /backbone_base_level3_downsample/MaxPool_output_0 scale and zero-point Quantization(scale: {0.0765355,}, zero-point: {0,})
[06/13/2024-10:16:09] [V] [TRT] INT8 Inference Tensor scales and zero-points: /backbone_base_level3_tree1_conv1/Relu_output_0 scale and zero-point Quantization(scale: {0.0784342,}, zero-point: {0,})
[06/13/2024-10:16:09] [V] [TRT] INT8 Inference Tensor scales and zero-points: /backbone_base_level3_project_0/Conv_output_0 scale and zero-point Quantization(scale: {0.0463886,}, zero-point: {0,})
[06/13/2024-10:16:09] [V] [TRT] INT8 Inference Tensor scales and zero-points: /backbone_base_level3_tree1_conv2/Conv_output_0 scale and zero-point Quantization(scale: {0.0750571,}, zero-point: {0,})
[06/13/2024-10:16:09] [V] [TRT] INT8 Inference Tensor scales and zero-points: /backbone_base_level3_tree1_Add/Add_output_0 scale and zero-point Quantization(scale: {0.0724247,}, zero-point: {0,})
[06/13/2024-10:16:09] [V] [TRT] INT8 Inference Tensor scales and zero-points: /backbone_base_level3_tree1_Add/relu/Relu_output_0 scale and zero-point Quantization(scale: {0.0724247,}, zero-point: {0,})
[06/13/2024-10:16:09] [V] [TRT] INT8 Inference Tensor scales and zero-points: /backbone_base_level3_tree2_conv1/Conv_output_0 scale and zero-point Quantization(scale: {0.0590003,}, zero-point: {0,})
[06/13/2024-10:16:09] [V] [TRT] INT8 Inference Tensor scales and zero-points: /backbone_base_level3_tree2_conv1/Relu_output_0 scale and zero-point Quantization(scale: {0.0590003,}, zero-point: {0,})
[06/13/2024-10:16:09] [V] [TRT] INT8 Inference Tensor scales and zero-points: /backbone_base_level3_tree2_conv2/Conv_output_0 scale and zero-point Quantization(scale: {0.111902,}, zero-point: {0,})
[06/13/2024-10:16:09] [V] [TRT] INT8 Inference Tensor scales and zero-points: /backbone_base_level3_tree2_Add/Add_output_0 scale and zero-point Quantization(scale: {0.12398,}, zero-point: {0,})
[06/13/2024-10:16:09] [V] [TRT] INT8 Inference Tensor scales and zero-points: /backbone_base_level3_tree2_Add/relu/Relu_output_0 scale and zero-point Quantization(scale: {0.12398,}, zero-point: {0,})
[06/13/2024-10:16:09] [V] [TRT] INT8 Inference Tensor scales and zero-points: /Concat_1_output_0 scale and zero-point Quantization(scale: {0.116908,}, zero-point: {0,})
[06/13/2024-10:16:09] [V] [TRT] Configuring builder for Int8 Mode completed in 0.00409282 seconds.
[06/13/2024-10:16:09] [W] [TRT] Missing scale and zero-point for tensor img, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[06/13/2024-10:16:09] [W] [TRT] Missing scale and zero-point for tensor /backbone_base_level3_root_conv/Conv_output_0, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[06/13/2024-10:16:09] [V] [TRT] Original: 36 layers
[06/13/2024-10:16:09] [V] [TRT] After dead-layer removal: 36 layers
[06/13/2024-10:16:09] [W] [TRT] Node: /backbone_base_level3_root_conv/Conv cannot run in INT8 mode due to missing scale and zero point. Attempting to fall back to FP16.
[06/13/2024-10:16:09] [V] [TRT] Applying generic optimizations to the graph for inference.
[06/13/2024-10:16:10] [V] [TRT] {ForeignNode[/normalize_input/Conv.../backbone_base_level3_root_conv/Conv]} successfully offloaded to DLA.
[06/13/2024-10:16:10] [V] [TRT] Memory consumption details:
[06/13/2024-10:16:10] [V] [TRT] 	Pool Sizes: Managed SRAM = 0.5 MiB,	Local DRAM = 1024 MiB,	Global DRAM = 512 MiB
[06/13/2024-10:16:10] [V] [TRT] 	Required: Managed SRAM = 0.5 MiB,	Local DRAM = 256 MiB,	Global DRAM = 4 MiB
[06/13/2024-10:16:10] [V] [TRT] DLA Memory Consumption Summary:
[06/13/2024-10:16:10] [V] [TRT] 	Number of DLA node candidates offloaded : 1 out of 1
[06/13/2024-10:16:10] [V] [TRT] 	Total memory required by accepted candidates : Managed SRAM = 0.5 MiB,	Local DRAM = 256 MiB,	Global DRAM = 4 MiB
[06/13/2024-10:16:10] [V] [TRT] After DLA optimization: 1 layers
[06/13/2024-10:16:10] [V] [TRT] Graph construction and optimization completed in 1.73826 seconds.
[06/13/2024-10:16:10] [I] [TRT] ---------- Layers Running on DLA ----------
[06/13/2024-10:16:10] [I] [TRT] [DlaLayer] {ForeignNode[/normalize_input/Conv.../backbone_base_level3_root_conv/Conv]}
[06/13/2024-10:16:10] [I] [TRT] ---------- Layers Running on GPU ----------
[06/13/2024-10:16:10] [V] [TRT] Trying to load shared library libcublas.so.11
[06/13/2024-10:16:10] [V] [TRT] Loaded shared library libcublas.so.11
[06/13/2024-10:16:11] [V] [TRT] Using cublas as plugin tactic source
[06/13/2024-10:16:11] [V] [TRT] Trying to load shared library libcublasLt.so.11
[06/13/2024-10:16:11] [V] [TRT] Loaded shared library libcublasLt.so.11
[06/13/2024-10:16:11] [V] [TRT] Using cublasLt as core library tactic source
[06/13/2024-10:16:11] [I] [TRT] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +534, GPU +824, now: CPU 1109, GPU 8900 (MiB)
[06/13/2024-10:16:11] [V] [TRT] Trying to load shared library libcudnn.so.8
[06/13/2024-10:16:11] [V] [TRT] Loaded shared library libcudnn.so.8
[06/13/2024-10:16:11] [V] [TRT] Using cuDNN as plugin tactic source
[06/13/2024-10:16:12] [V] [TRT] Using cuDNN as core library tactic source
[06/13/2024-10:16:12] [I] [TRT] [MemUsageChange] Init cuDNN: CPU +83, GPU +113, now: CPU 1192, GPU 9013 (MiB)
[06/13/2024-10:16:12] [I] [TRT] Local timing cache in use. Profiling results in this builder pass will not be stored.
[06/13/2024-10:16:12] [V] [TRT] Constructing optimization profile number 0 [1/1].
[06/13/2024-10:16:12] [V] [TRT] 0 requirements combinations were removed due to type constraints.
[06/13/2024-10:16:12] [V] [TRT] Reserving memory for host IO tensors. Host: 0 bytes
[06/13/2024-10:16:12] [V] [TRT] =============== Computing reformatting costs: 
[06/13/2024-10:16:12] [V] [TRT] *************** Autotuning Reformat: Half(675840,225280,640,1) -> Half(225280,225280:16,640,1) ***************
[06/13/2024-10:16:12] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(img -> <out>) (Reformat)
[06/13/2024-10:16:12] [V] [TRT] Setting a default quantization params because quantization data is missing for 
[06/13/2024-10:16:12] [V] [TRT] Tactic: 0x00000000000003e8 Time: 16.5904
[06/13/2024-10:16:12] [V] [TRT] Setting a default quantization params because quantization data is missing for 
[06/13/2024-10:16:12] [V] [TRT] Tactic: 0x00000000000003ea Time: 3.11351
[06/13/2024-10:16:12] [V] [TRT] Setting a default quantization params because quantization data is missing for 
[06/13/2024-10:16:12] [V] [TRT] Tactic: 0x0000000000000000 Time: 16.2153
[06/13/2024-10:16:12] [V] [TRT] Fastest Tactic: 0x00000000000003ea Time: 3.11351
[06/13/2024-10:16:12] [V] [TRT] =============== Computing reformatting costs: 
[06/13/2024-10:16:12] [V] [TRT] *************** Autotuning Reformat: Half(14080,3520:16,80,1) -> Half(270336,4224,96,1) ***************
[06/13/2024-10:16:12] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(<in> -> /backbone_base_level3_root_conv/Conv_output_0) (Reformat)
[06/13/2024-10:16:12] [V] [TRT] Setting a default quantization params because quantization data is missing for 
[06/13/2024-10:16:12] [V] [TRT] Tactic: 0x00000000000003e8 Time: 0.414665
[06/13/2024-10:16:12] [V] [TRT] Setting a default quantization params because quantization data is missing for 
[06/13/2024-10:16:12] [V] [TRT] Tactic: 0x00000000000003ea Time: 0.396521
[06/13/2024-10:16:12] [V] [TRT] Setting a default quantization params because quantization data is missing for 
[06/13/2024-10:16:12] [V] [TRT] Tactic: 0x0000000000000000 Time: 0.413541
[06/13/2024-10:16:12] [V] [TRT] Fastest Tactic: 0x00000000000003ea Time: 0.396521
[06/13/2024-10:16:12] [V] [TRT] =============== Computing costs for 
[06/13/2024-10:16:12] [V] [TRT] *************** Autotuning format combination: Half(675840,225280,640,1) -> Half(270336,4224,96,1) ***************
[06/13/2024-10:16:12] [V] [TRT] --------------- Timing Runner: {ForeignNode[/normalize_input/Conv.../backbone_base_level3_root_conv/Conv]} (DLA)
[06/13/2024-10:16:12] [V] [TRT] Setting a default quantization params because quantization data is missing for {ForeignNode[/normalize_input/Conv.../backbone_base_level3_root_conv/Conv]}
[06/13/2024-10:16:13] [V] [TRT] Tactic: 0x0000000000000003 Time: 34.1704
[06/13/2024-10:16:13] [V] [TRT] Fastest Tactic: 0x0000000000000003 Time: 34.1704
[06/13/2024-10:16:13] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: DLA Tactic: 0x0000000000000003
[06/13/2024-10:16:13] [V] [TRT] *************** Autotuning format combination: Half(675840,225280,640,1) -> Half(14080,3520:16,80,1) ***************
[06/13/2024-10:16:13] [V] [TRT] --------------- Timing Runner: {ForeignNode[/normalize_input/Conv.../backbone_base_level3_root_conv/Conv]} (DLA)
[06/13/2024-10:16:13] [V] [TRT] Setting a default quantization params because quantization data is missing for {ForeignNode[/normalize_input/Conv.../backbone_base_level3_root_conv/Conv]}
[06/13/2024-10:16:13] [V] [TRT] Tactic: 0x0000000000000003 Time: 33.8291
[06/13/2024-10:16:13] [V] [TRT] Fastest Tactic: 0x0000000000000003 Time: 33.8291
[06/13/2024-10:16:13] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: DLA Tactic: 0x0000000000000003
[06/13/2024-10:16:13] [V] [TRT] *************** Autotuning format combination: Half(225280,225280:16,640,1) -> Half(270336,4224,96,1) ***************
[06/13/2024-10:16:13] [V] [TRT] --------------- Timing Runner: {ForeignNode[/normalize_input/Conv.../backbone_base_level3_root_conv/Conv]} (DLA)
[06/13/2024-10:16:14] [V] [TRT] Setting a default quantization params because quantization data is missing for {ForeignNode[/normalize_input/Conv.../backbone_base_level3_root_conv/Conv]}
[06/13/2024-10:16:14] [V] [TRT] Tactic: 0x0000000000000003 Time: 28.8579
[06/13/2024-10:16:14] [V] [TRT] Fastest Tactic: 0x0000000000000003 Time: 28.8579
[06/13/2024-10:16:14] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: DLA Tactic: 0x0000000000000003
[06/13/2024-10:16:14] [V] [TRT] *************** Autotuning format combination: Half(225280,225280:16,640,1) -> Half(14080,3520:16,80,1) ***************
[06/13/2024-10:16:14] [V] [TRT] --------------- Timing Runner: {ForeignNode[/normalize_input/Conv.../backbone_base_level3_root_conv/Conv]} (DLA)
[06/13/2024-10:16:15] [V] [TRT] Setting a default quantization params because quantization data is missing for {ForeignNode[/normalize_input/Conv.../backbone_base_level3_root_conv/Conv]}
[06/13/2024-10:16:15] [V] [TRT] Tactic: 0x0000000000000003 Time: 28.5168
[06/13/2024-10:16:15] [V] [TRT] Fastest Tactic: 0x0000000000000003 Time: 28.5168
[06/13/2024-10:16:15] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: DLA Tactic: 0x0000000000000003
[06/13/2024-10:16:15] [V] [TRT] Formats and tactics selection completed in 3.21948 seconds.
[06/13/2024-10:16:15] [V] [TRT] After reformat layers: 1 layers
[06/13/2024-10:16:15] [V] [TRT] Total number of blocks in pre-optimized block assignment: 1
[06/13/2024-10:16:15] [I] [TRT] Total Activation Memory: 32078786560
[06/13/2024-10:16:15] [V] [TRT] {ForeignNode[/normalize_input/Conv.../backbone_base_level3_root_conv/Conv]}: DLA Network Layer Information:
Layer(CONVOLUTION): /normalize_input/Conv, Precision: Half, img'[Half([8,3,352,640])] -> /normalize_input/Conv_output_0'[Int8([8,3,352,640])]
Layer(CONVOLUTION): /backbone_base_base_layer_0/Conv, Precision: Int8, /normalize_input/Conv_output_0'[Int8([8,3,352,640])] -> /backbone_base_base_layer_0/Conv_output_0'[Int8([8,16,176,320])]
Layer(ACTIVATION): /backbone_base_base_layer_0/Relu, Precision: Int8, /backbone_base_base_layer_0/Conv_output_0'[Int8([8,16,176,320])] -> /backbone_base_base_layer_0/Relu_output_0'[Int8([8,16,176,320])]
Layer(CONVOLUTION): /backbone_base_level0_0/Conv, Precision: Int8, /backbone_base_base_layer_0/Relu_output_0'[Int8([8,16,176,320])] -> /backbone_base_level0_0/Conv_output_0'[Int8([8,16,176,320])]
Layer(ACTIVATION): /backbone_base_level0_0/Relu, Precision: Int8, /backbone_base_level0_0/Conv_output_0'[Int8([8,16,176,320])] -> /backbone_base_level0_0/Relu_output_0'[Int8([8,16,176,320])]
Layer(CONVOLUTION): /backbone_base_level1_0/Conv, Precision: Int8, /backbone_base_level0_0/Relu_output_0'[Int8([8,16,176,320])] -> /backbone_base_level1_0/Conv_output_0'[Int8([8,32,176,320])]
Layer(ACTIVATION): /backbone_base_level1_0/Relu, Precision: Int8, /backbone_base_level1_0/Conv_output_0'[Int8([8,32,176,320])] -> /backbone_base_level1_0/Relu_output_0'[Int8([8,32,176,320])]
Layer(CONVOLUTION): /backbone_base_level2_tree1_conv1/Conv, Precision: Int8, /backbone_base_level1_0/Relu_output_0'[Int8([8,32,176,320])] -> /backbone_base_level2_tree1_conv1/Conv_output_0'[Int8([8,48,88,160])]
Layer(POOLING): /backbone_base_level2_downsample/MaxPool, Precision: Int8, /backbone_base_level1_0/Relu_output_0'[Int8([8,32,176,320])] -> /backbone_base_level2_downsample/MaxPool_output_0'[Int8([8,32,88,160])]
Layer(ACTIVATION): /backbone_base_level2_tree1_conv1/Relu, Precision: Int8, /backbone_base_level2_tree1_conv1/Conv_output_0'[Int8([8,48,88,160])] -> /backbone_base_level2_tree1_conv1/Relu_output_0'[Int8([8,48,88,160])]
Layer(CONVOLUTION): /backbone_base_level2_project_0/Conv, Precision: Int8, /backbone_base_level2_downsample/MaxPool_output_0'[Int8([8,32,88,160])] -> /backbone_base_level2_project_0/Conv_output_0'[Int8([8,48,88,160])]
Layer(CONVOLUTION): /backbone_base_level2_tree1_conv2/Conv, Precision: Int8, /backbone_base_level2_tree1_conv1/Relu_output_0'[Int8([8,48,88,160])] -> /backbone_base_level2_tree1_conv2/Conv_output_0'[Int8([8,48,88,160])]
Layer(ELEMENTWISE): /backbone_base_level2_tree1_Add/Add, Precision: Int8, /backbone_base_level2_tree1_conv2/Conv_output_0'[Int8([8,48,88,160])], /backbone_base_level2_project_0/Conv_output_0'[Int8([8,48,88,160])] -> /backbone_base_level2_tree1_Add/Add_output_0'[Int8([8,48,88,160])]
Layer(ACTIVATION): /backbone_base_level2_tree1_Add/relu/Relu, Precision: Int8, /backbone_base_level2_tree1_Add/Add_output_0'[Int8([8,48,88,160])] -> /backbone_base_level2_tree1_Add/relu/Relu_output_0'[Int8([8,48,88,160])]
Layer(CONVOLUTION): /backbone_base_level2_tree2_conv1/Conv, Precision: Int8, /backbone_base_level2_tree1_Add/relu/Relu_output_0'[Int8([8,48,88,160])] -> /backbone_base_level2_tree2_conv1/Conv_output_0'[Int8([8,48,88,160])]
Layer(ACTIVATION): /backbone_base_level2_tree2_conv1/Relu, Precision: Int8, /backbone_base_level2_tree2_conv1/Conv_output_0'[Int8([8,48,88,160])] -> /backbone_base_level2_tree2_conv1/Relu_output_0'[Int8([8,48,88,160])]
Layer(CONVOLUTION): /backbone_base_level2_tree2_conv2/Conv, Precision: Int8, /backbone_base_level2_tree2_conv1/Relu_output_0'[Int8([8,48,88,160])] -> /backbone_base_level2_tree2_conv2/Conv_output_0'[Int8([8,48,88,160])]
Layer(ELEMENTWISE): /backbone_base_level2_tree2_Add/Add, Precision: Int8, /backbone_base_level2_tree2_conv2/Conv_output_0'[Int8([8,48,88,160])], /backbone_base_level2_tree1_Add/relu/Relu_output_0'[Int8([8,48,88,160])] -> /backbone_base_level2_tree2_Add/Add_output_0'[Int8([8,48,88,160])]
Layer(ACTIVATION): /backbone_base_level2_tree2_Add/relu/Relu, Precision: Int8, /backbone_base_level2_tree2_Add/Add_output_0'[Int8([8,48,88,160])] -> /backbone_base_level2_tree2_Add/relu/Relu_output_0'[Int8([8,48,88,160])]
Layer(CONCATENATION): /Concat, Precision: Int8, /backbone_base_level2_tree2_Add/relu/Relu_output_0'[Int8([8,48,88,160])], /backbone_base_level2_tree1_Add/relu/Relu_output_0'[Int8([8,48,88,160])] -> /Concat_output_0'[Int8([1,96,88,160])]
Layer(CONVOLUTION): /backbone_base_level2_root_conv/Conv, Precision: Int8, /Concat_output_0'[Int8([1,96,88,160])] -> /backbone_base_level2_root_conv/Conv_output_0'[Int8([8,48,88,160])]
Layer(ACTIVATION): /backbone_base_level2_root_conv/Relu, Precision: Int8, /backbone_base_level2_root_conv/Conv_output_0'[Int8([8,48,88,160])] -> /backbone_base_level2_root_conv/Relu_output_0'[Int8([8,48,88,160])]
Layer(CONVOLUTION): /backbone_base_level3_tree1_conv1/Conv, Precision: Int8, /backbone_base_level2_root_conv/Relu_output_0'[Int8([8,48,88,160])] -> /backbone_base_level3_tree1_conv1/Conv_output_0'[Int8([8,64,44,80])]
Layer(POOLING): /backbone_base_level3_downsample/MaxPool, Precision: Int8, /backbone_base_level2_root_conv/Relu_output_0'[Int8([8,48,88,160])] -> /backbone_base_level3_downsample/MaxPool_output_0'[Int8([8,48,44,80])]
Layer(ACTIVATION): /backbone_base_level3_tree1_conv1/Relu, Precision: Int8, /backbone_base_level3_tree1_conv1/Conv_output_0'[Int8([8,64,44,80])] -> /backbone_base_level3_tree1_conv1/Relu_output_0'[Int8([8,64,44,80])]
Layer(CONVOLUTION): /backbone_base_level3_project_0/Conv, Precision: Int8, /backbone_base_level3_downsample/MaxPool_output_0'[Int8([8,48,44,80])] -> /backbone_base_level3_project_0/Conv_output_0'[Int8([8,64,44,80])]
Layer(CONVOLUTION): /backbone_base_level3_tree1_conv2/Conv, Precision: Int8, /backbone_base_level3_tree1_conv1/Relu_output_0'[Int8([8,64,44,80])] -> /backbone_base_level3_tree1_conv2/Conv_output_0'[Int8([8,64,44,80])]
Layer(ELEMENTWISE): /backbone_base_level3_tree1_Add/Add, Precision: Int8, /backbone_base_level3_tree1_conv2/Conv_output_0'[Int8([8,64,44,80])], /backbone_base_level3_project_0/Conv_output_0'[Int8([8,64,44,80])] -> /backbone_base_level3_tree1_Add/Add_output_0'[Int8([8,64,44,80])]
Layer(ACTIVATION): /backbone_base_level3_tree1_Add/relu/Relu, Precision: Int8, /backbone_base_level3_tree1_Add/Add_output_0'[Int8([8,64,44,80])] -> /backbone_base_level3_tree1_Add/relu/Relu_output_0'[Int8([8,64,44,80])]
Layer(CONVOLUTION): /backbone_base_level3_tree2_conv1/Conv, Precision: Int8, /backbone_base_level3_tree1_Add/relu/Relu_output_0'[Int8([8,64,44,80])] -> /backbone_base_level3_tree2_conv1/Conv_output_0'[Int8([8,64,44,80])]
Layer(ACTIVATION): /backbone_base_level3_tree2_conv1/Relu, Precision: Int8, /backbone_base_level3_tree2_conv1/Conv_output_0'[Int8([8,64,44,80])] -> /backbone_base_level3_tree2_conv1/Relu_output_0'[Int8([8,64,44,80])]
Layer(CONVOLUTION): /backbone_base_level3_tree2_conv2/Conv, Precision: Int8, /backbone_base_level3_tree2_conv1/Relu_output_0'[Int8([8,64,44,80])] -> /backbone_base_level3_tree2_conv2/Conv_output_0'[Int8([8,64,44,80])]
Layer(ELEMENTWISE): /backbone_base_level3_tree2_Add/Add, Precision: Int8, /backbone_base_level3_tree2_conv2/Conv_output_0'[Int8([8,64,44,80])], /backbone_base_level3_tree1_Add/relu/Relu_output_0'[Int8([8,64,44,80])] -> /backbone_base_level3_tree2_Add/Add_output_0'[Int8([8,64,44,80])]
Layer(ACTIVATION): /backbone_base_level3_tree2_Add/relu/Relu, Precision: Int8, /backbone_base_level3_tree2_Add/Add_output_0'[Int8([8,64,44,80])] -> /backbone_base_level3_tree2_Add/relu/Relu_output_0'[Int8([8,64,44,80])]
Layer(CONCATENATION): /Concat_1, Precision: Int8, /backbone_base_level3_tree2_Add/relu/Relu_output_0'[Int8([8,64,44,80])], /backbone_base_level3_tree1_Add/relu/Relu_output_0'[Int8([8,64,44,80])], /backbone_base_level3_downsample/MaxPool_output_0'[Int8([8,48,44,80])] -> /Concat_1_output_0'[Half([1,176,44,80])]
Layer(CONVOLUTION): /backbone_base_level3_root_conv/Conv, Precision: Half, /Concat_1_output_0'[Half([1,176,44,80])] -> /backbone_base_level3_root_conv/Conv_output_0'[Half([8,64,44,80])]
[06/13/2024-10:16:15] [V] [TRT] Setting a default quantization params because quantization data is missing for {ForeignNode[/normalize_input/Conv.../backbone_base_level3_root_conv/Conv]}
[06/13/2024-10:16:15] [V] [TRT] Layer: {ForeignNode[/normalize_input/Conv.../backbone_base_level3_root_conv/Conv]} Host Persistent: 88 Device Persistent: 0 Scratch Memory: 0
[06/13/2024-10:16:15] [V] [TRT] Skipped printing memory information for 0 layers with 0 memory size i.e. Host Persistent + Device Persistent + Scratch Memory == 0.
[06/13/2024-10:16:15] [I] [TRT] Total Host Persistent Memory: 96
[06/13/2024-10:16:15] [I] [TRT] Total Device Persistent Memory: 0
[06/13/2024-10:16:15] [I] [TRT] Total Scratch Memory: 0
[06/13/2024-10:16:15] [I] [TRT] [MemUsageStats] Peak memory usage of TRT CPU/GPU memory allocators: CPU 1 MiB, GPU 69 MiB
[06/13/2024-10:16:15] [V] [TRT] Total number of blocks in optimized block assignment: 0
[06/13/2024-10:16:15] [I] [TRT] Total Activation Memory: 0
[06/13/2024-10:16:15] [V] [TRT] Total number of generated kernels selected for the engine: 0
[06/13/2024-10:16:15] [V] [TRT] Disabling unused tactic source: CUDNN
[06/13/2024-10:16:15] [V] [TRT] Disabling unused tactic source: CUBLAS, CUBLAS_LT
[06/13/2024-10:16:15] [V] [TRT] Disabling unused tactic source: EDGE_MASK_CONVOLUTIONS
[06/13/2024-10:16:15] [V] [TRT] Disabling unused tactic source: JIT_CONVOLUTIONS
[06/13/2024-10:16:15] [V] [TRT] Engine generation completed in 4.92113 seconds.
[06/13/2024-10:16:15] [V] [TRT] Deleting timing cache: 2 entries, served 0 hits since creation.
[06/13/2024-10:16:15] [V] [TRT] Engine Layer Information:
Layer(DLA): {ForeignNode[/normalize_input/Conv.../backbone_base_level3_root_conv/Conv]}, Tactic: 0x0000000000000003, img (Half[8,3,352,640]) -> /backbone_base_level3_root_conv/Conv_output_0 (Half[8,64,44,80])
[06/13/2024-10:16:15] [I] [TRT] [MemUsageChange] TensorRT-managed allocation in building engine: CPU +1, GPU +4, now: CPU 1, GPU 4 (MiB)
[06/13/2024-10:16:15] [I] Engine built in 10.2974 sec.
[06/13/2024-10:16:15] [I] [TRT] Loaded engine size: 1 MiB
[06/13/2024-10:16:15] [E] Error[9]: Cannot deserialize serialized engine built with EngineCapability::kDLA_STANDALONE, use cuDLA APIs instead.
[06/13/2024-10:16:15] [E] Error[4]: [runtime.cpp::deserializeCudaEngine::65] Error Code 4: Internal Error (Engine deserialization failed.)
[06/13/2024-10:16:15] [E] Engine deserialization failed
[06/13/2024-10:16:15] [I] Skipped inference phase since --buildOnly is added.
&&&& PASSED TensorRT.trtexec [TensorRT v8502] # /usr/src/tensorrt/bin/trtexec --minShapes=img:8x3x352x640 --maxShapes=img:8x3x352x640 --optShapes=img:8x3x352x640 --shapes=img:8x3x352x640 --onnx=model_dync_bs1/TL_FULL_XG_qat_simplified_modified_noqdq.onnx --useDLACore=0 --buildDLAStandalone --saveEngine=model_dync_bs1/model.fp16.linearin.fp16linearout.standalone.bin --inputIOFormats=fp16:dla_linear --outputIOFormats=fp16:dla_linear --int8 --fp16 --verbose --calib=model_dync_bs1/TL_FULL_XG_qat_simplified_modified_precision_config_calib.cache --precisionConstraints=obey --layerPrecisions=/normalize_input/Conv:fp16,
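As a side note on the deploy side: the per-batch input stride `675840` in the autotuning lines above is exactly 3 × 352 × 640, i.e. a tightly packed NCHW layout. The fp16:dla_linear buffer sizes that must be registered for the 8x3x352x640 input and the five 8xKx88x160 outputs can be sanity-checked with a few lines (a quick arithmetic sketch, not from the real deploy code; `linear_fp16_bytes` is an illustrative helper):

```python
def linear_fp16_bytes(n, c, h, w):
    """Bytes needed for a tightly packed NCHW fp16 (dla_linear) buffer."""
    return n * c * h * w * 2  # 2 bytes per half-precision element

# Input img: 8x3x352x640 -> per-batch stride 675840 as seen in the log
in_bytes = linear_fp16_bytes(8, 3, 352, 640)

# Five outputs: 8xKx88x160 with K = 3, 8, 5, 9, 10
out_bytes = [linear_fp16_bytes(8, k, 88, 160) for k in (3, 8, 5, 9, 10)]

print(in_bytes)   # 10813440
print(out_bytes)  # [675840, 1802240, 1126400, 2027520, 2252800]
```

If the registered buffer sizes in the deploy code differ from these, later batches would read or write past the intended region, which can look exactly like "first batch correct, rest garbage".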

Hi,

Have you compared the outputs between DLA and GPU?

Would you mind sharing the differences with us?
Since the hardware is different, it is possible that the results contain slight differences.

Thanks.
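Since the input is the same image repeated 8 times, one quick way to localize the problem is to compare each batch of an output against batch 0: all eight should match. A minimal sketch, assuming the fp16 output has already been converted to a flat fp32 list in NCHW order (the helper names here are illustrative, not from the poster's code):

```python
import math

def cosine(a, b):
    """Cosine similarity of two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def per_batch_similarity(out, n, per_batch):
    """Split a flat NCHW output into n batches and compare each to batch 0."""
    batches = [out[i * per_batch:(i + 1) * per_batch] for i in range(n)]
    return [cosine(batches[0], b) for b in batches]

# Toy example: a fake 8x3x2x2 output where batch 5 has been corrupted.
per_batch = 3 * 2 * 2
out = [1.0] * (8 * per_batch)
out[5 * per_batch] = -100.0
sims = per_batch_similarity(out, 8, per_batch)
print([round(s, 3) for s in sims])
```

Run against the real 8xKx88x160 outputs (with `per_batch = K * 88 * 160`), this shows at a glance whether the later batches are uniformly wrong or whether a specific batch offset is corrupted, which helps distinguish a quantization issue from a buffer-stride issue.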