YOLOv4-tiny: ONNX to TensorRT conversion fails

(base) uu@uu-Matrimax-PC:~/pytorch-YOLOv4$ /home/uu/TensorRT-7.0.0.11/targets/x86_64-linux-gnu/bin/trtexec --onnx=yolov4_1_3_416_416_static.onnx --explicitBatch --saveEngine=yolov4_1_3_416_416_static_fp16.engine --workspace=4096 --fp16
&&&& RUNNING TensorRT.trtexec # /home/uu/TensorRT-7.0.0.11/targets/x86_64-linux-gnu/bin/trtexec --onnx=yolov4_1_3_416_416_static.onnx --explicitBatch --saveEngine=yolov4_1_3_416_416_static_fp16.engine --workspace=4096 --fp16
[11/22/2020-10:02:42] [I] === Model Options ===
[11/22/2020-10:02:42] [I] Format: ONNX
[11/22/2020-10:02:42] [I] Model: yolov4_1_3_416_416_static.onnx
[11/22/2020-10:02:42] [I] Output:
[11/22/2020-10:02:42] [I] === Build Options ===
[11/22/2020-10:02:42] [I] Max batch: explicit
[11/22/2020-10:02:42] [I] Workspace: 4096 MB
[11/22/2020-10:02:42] [I] minTiming: 1
[11/22/2020-10:02:42] [I] avgTiming: 8
[11/22/2020-10:02:42] [I] Precision: FP16
[11/22/2020-10:02:42] [I] Calibration:
[11/22/2020-10:02:42] [I] Safe mode: Disabled
[11/22/2020-10:02:42] [I] Save engine: yolov4_1_3_416_416_static_fp16.engine
[11/22/2020-10:02:42] [I] Load engine:
[11/22/2020-10:02:42] [I] Inputs format: fp32:CHW
[11/22/2020-10:02:42] [I] Outputs format: fp32:CHW
[11/22/2020-10:02:42] [I] Input build shapes: model
[11/22/2020-10:02:42] [I] === System Options ===
[11/22/2020-10:02:42] [I] Device: 0
[11/22/2020-10:02:42] [I] DLACore:
[11/22/2020-10:02:42] [I] Plugins:
[11/22/2020-10:02:42] [I] === Inference Options ===
[11/22/2020-10:02:42] [I] Batch: Explicit
[11/22/2020-10:02:42] [I] Iterations: 10
[11/22/2020-10:02:42] [I] Duration: 3s (+ 200ms warm up)
[11/22/2020-10:02:42] [I] Sleep time: 0ms
[11/22/2020-10:02:42] [I] Streams: 1
[11/22/2020-10:02:42] [I] ExposeDMA: Disabled
[11/22/2020-10:02:42] [I] Spin-wait: Disabled
[11/22/2020-10:02:42] [I] Multithreading: Disabled
[11/22/2020-10:02:42] [I] CUDA Graph: Disabled
[11/22/2020-10:02:42] [I] Skip inference: Disabled
[11/22/2020-10:02:42] [I] Inputs:
[11/22/2020-10:02:42] [I] === Reporting Options ===
[11/22/2020-10:02:42] [I] Verbose: Disabled
[11/22/2020-10:02:42] [I] Averages: 10 inferences
[11/22/2020-10:02:42] [I] Percentile: 99
[11/22/2020-10:02:42] [I] Dump output: Disabled
[11/22/2020-10:02:42] [I] Profile: Disabled
[11/22/2020-10:02:42] [I] Export timing to JSON file:
[11/22/2020-10:02:42] [I] Export output to JSON file:
[11/22/2020-10:02:42] [I] Export profile to JSON file:
[11/22/2020-10:02:42] [I]

Input filename: yolov4_1_3_416_416_static.onnx
ONNX IR version: 0.0.6
Opset version: 11
Producer name: pytorch
Producer version: 1.7
Domain:
Model version: 0
Doc string:

[11/22/2020-10:02:42] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[11/22/2020-10:02:42] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[11/22/2020-10:02:42] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[11/22/2020-10:02:42] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[11/22/2020-10:02:42] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[11/22/2020-10:02:42] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[11/22/2020-10:02:42] [E] [TRT] Layer: Where_491's output can not be used as shape tensor.
[11/22/2020-10:02:42] [E] [TRT] Network validation failed.
[11/22/2020-10:02:42] [E] Engine creation failed
[11/22/2020-10:02:42] [E] Engine set up failed
&&&& FAILED TensorRT.trtexec # /home/uu/TensorRT-7.0.0.11/targets/x86_64-linux-gnu/bin/trtexec --onnx=yolov4_1_3_416_416_static.onnx --explicitBatch --saveEngine=yolov4_1_3_416_416_static_fp16.engine --workspace=4096 --fp16

When I try to convert the ONNX model to a TensorRT engine, the error above occurs.
Please help!
Thanks!

Could you please provide the model and script files so we can help better?
Meanwhile, could you please try the latest TensorRT release?
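One workaround that has helped with similar "can not be used as shape tensor" errors on TensorRT 7.0 is to constant-fold the graph with `onnx-simplifier` before building the engine, so that shape computations (such as the `Where` node) become static constants. This is untested against your model and assumes a `pip` environment:

```shell
# Install onnx-simplifier (assumption: same Python env used for the export)
pip install onnx-simplifier

# Fold constants / simplify shape subgraphs in the exported model
python -m onnxsim yolov4_1_3_416_416_static.onnx yolov4_simplified.onnx

# Retry the TensorRT build on the simplified model
/home/uu/TensorRT-7.0.0.11/targets/x86_64-linux-gnu/bin/trtexec \
    --onnx=yolov4_simplified.onnx --explicitBatch \
    --saveEngine=yolov4_1_3_416_416_static_fp16.engine \
    --workspace=4096 --fp16
```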

Thanks