DeepStream nvinfer error

When I load the ONNX file with DeepStream, I get the following error:
gst-launch-1.0 rtspsrc location=rtsp://admin:admin@192.168.3.100:8557/h264 ! rtph264depay ! nvv4l2decoder ! m.sink_0 nvstreammux name=m batch-size=1 width=1920 height=1080 ! nvinfer config-file-path=tgie_config.txt ! nvvideoconvert ! nvdsosd ! nvvideoconvert ! nvv4l2h264enc ! h264parse ! flvmux ! rtmpsink location=rtmp://192.168.1.213:1935/live/2020 sync=false
Setting pipeline to PAUSED …
Opening in BLOCKING MODE
Opening in BLOCKING MODE
0:00:00.422628374 14487 0x55c353af20 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger: NvDsInferContext[UID 10003]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1719> [UID = 10003]: Trying to create engine from model files

Input filename: /home/nvidia/luozw/tensorRT-7/data/onnx/smoke_phone.onnx
ONNX IR version: 0.0.6
Opset version: 11
Producer name: tf2onnx
Producer version: 1.5.5
Domain:
Model version: 0
Doc string:

WARNING: [TRT]: onnx2trt_utils.cpp:220: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
WARNING: INT8 calibration file not specified. Trying FP16 mode.
WARNING: [TRT]: DLA requests all profiles have same min, max, and opt value. All dla layers are falling back to GPU
INFO: [TRT]:
INFO: [TRT]: --------------- Layers running on DLA:
INFO: [TRT]:
INFO: [TRT]: --------------- Layers running on GPU:
INFO: [TRT]: darknet/conv0/Conv2D__20, Conv__247, darknet/conv0/LeakyRelu, Conv__248, darknet/conv1/LeakyRelu, Conv__251, darknet/residual0/conv1/LeakyRelu, Conv__252, PWN(darknet/residual0/conv2/LeakyRelu, darknet/residual0/add), Conv__253, darknet/conv4/LeakyRelu, Conv__256, darknet/residual1/conv1/LeakyRelu, Conv__257, PWN(darknet/residual1/conv2/LeakyRelu, darknet/residual1/add), Conv__260, darknet/residual2/conv1/LeakyRelu, Conv__261, PWN(darknet/residual2/conv2/LeakyRelu, darknet/residual2/add), Conv__262, darknet/conv9/LeakyRelu, Conv__265, darknet/residual3/conv1/LeakyRelu, Conv__266, PWN(darknet/residual3/conv2/LeakyRelu, darknet/residual3/add), Conv__269, darknet/residual4/conv1/LeakyRelu, Conv__270, PWN(darknet/residual4/conv2/LeakyRelu, darknet/residual4/add), Conv__273, darknet/residual5/conv1/LeakyRelu, Conv__274, PWN(darknet/residual5/conv2/LeakyRelu, darknet/residual5/add), Conv__277, darknet/residual6/conv1/LeakyRelu, Conv__278, PWN(darknet/residual6/conv2/LeakyRelu, darknet/residual6/add), Conv__281, darknet/residual7/conv1/LeakyRelu, Conv__282, PWN(darknet/residual7/conv2/LeakyRelu, darknet/residual7/add), Conv__285, darknet/residual8/conv1/LeakyRelu, Conv__286, PWN(darknet/residual8/conv2/LeakyRelu, darknet/residual8/add), Conv__289, darknet/residual9/conv1/LeakyRelu, Conv__290, PWN(darknet/residual9/conv2/LeakyRelu, darknet/residual9/add), Conv__293, darknet/residual10/conv1/LeakyRelu, Conv__294, PWN(darknet/residual10/conv2/LeakyRelu, darknet/residual10/add), Conv__297, darknet/conv26/LeakyRelu, Conv__300, darknet/residual11/conv1/LeakyRelu, Conv__301, PWN(darknet/residual11/conv2/LeakyRelu, darknet/residual11/add), Conv__304, darknet/residual12/conv1/LeakyRelu, Conv__305, PWN(darknet/residual12/conv2/LeakyRelu, darknet/residual12/add), Conv__308, darknet/residual13/conv1/LeakyRelu, Conv__309, PWN(darknet/residual13/conv2/LeakyRelu, darknet/residual13/add), Conv__312, darknet/residual14/conv1/LeakyRelu, Conv__313, PWN(darknet/residual14/conv2/LeakyRelu, darknet/residual14/add), Conv__316, darknet/residual15/conv1/LeakyRelu, Conv__317, PWN(darknet/residual15/conv2/LeakyRelu, darknet/residual15/add), Conv__320, darknet/residual16/conv1/LeakyRelu, Conv__321, PWN(darknet/residual16/conv2/LeakyRelu, darknet/residual16/add), Conv__324, darknet/residual17/conv1/LeakyRelu, Conv__325, PWN(darknet/residual17/conv2/LeakyRelu, darknet/residual17/add), Conv__328, darknet/residual18/conv1/LeakyRelu, Conv__329, PWN(darknet/residual18/conv2/LeakyRelu, darknet/residual18/add), darknet/conv43/Conv2D, darknet/conv43/LeakyRelu, Conv__334, darknet/residual19/conv1/LeakyRelu, darknet/residual19/conv2/Conv2D, PWN(darknet/residual19/conv2/LeakyRelu, darknet/residual19/add), Conv__337, darknet/residual20/conv1/LeakyRelu, darknet/residual20/conv2/Conv2D, PWN(darknet/residual20/conv2/LeakyRelu, darknet/residual20/add), Conv__340, darknet/residual21/conv1/LeakyRelu, darknet/residual21/conv2/Conv2D, PWN(darknet/residual21/conv2/LeakyRelu, darknet/residual21/add), Conv__343, darknet/residual22/conv1/LeakyRelu, darknet/residual22/conv2/Conv2D, PWN(darknet/residual22/conv2/LeakyRelu, darknet/residual22/add), Conv__344, conv52/LeakyRelu, conv53/Conv2D, conv53/LeakyRelu, Conv__345, conv54/LeakyRelu, conv55/Conv2D, conv55/LeakyRelu, Conv__346, conv56/LeakyRelu, Conv__350, conv_lobj_branch/Conv2D, conv57/LeakyRelu, conv_lobj_branch/LeakyRelu, Conv__349, conv_lbbox/Conv2D__143 + pred_lbbox/reshape, pred_lbbox/strided_slice, pred_lbbox/Sigmoid, pred_lbbox/Reshape_1, pred_lbbox/strided_slice_1, 
pred_lbbox/Exp, pred_lbbox/Reshape_2, pred_lbbox/strided_slice_2, pred_lbbox/Sigmoid_1, pred_lbbox/strided_slice_3, pred_lbbox/Sigmoid_2, Resize__172, Resize__172:0 copy, Conv__351, conv58/LeakyRelu, Conv__352, (Unnamed Layer* 166) [Constant] + pred_lbbox/Add + (Unnamed Layer* 168) [Constant] + pred_lbbox/Mul, pred_lbbox/Reshape_4, (Unnamed Layer* 158) [Constant] + pred_lbbox/Mul_1 + (Unnamed Layer* 160) [Constant] + pred_lbbox/Mul_2, pred_lbbox/Reshape_5, pred_lbbox/Reshape_4:0 copy, pred_lbbox/Reshape_5:0 copy, pred_lbbox/concat:0 copy, pred_lbbox/S
INFO: [TRT]: Some tactics do not have sufficient workspace memory to run. Increasing workspace size may increase performance, please check verbose output.
ERROR: [TRT]: …/builder/cudnnBuilderUtils.cpp (427) - Cuda Error in findFastestTactic: 700 (an illegal memory access was encountered)
ERROR: [TRT]: Parameter check failed at: /home/jenkins/workspace/TensorRT/helpers/rel-7.1/L1_Nightly_Internal/build/source/rtSafe/resources.h::operator()::393, condition: CudaDeleterAPI(ptr) failure.
ERROR: [TRT]: …/rtSafe/safeRuntime.cpp (32) - Cuda Error in free: 700 (an illegal memory access was encountered)
terminate called after throwing an instance of 'nvinfer1::CudaError'
what(): std::exception
Aborted (core dumped)
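
For reference, the tgie_config.txt referenced in the pipeline is a standard nvinfer configuration. A trimmed sketch of it follows; only the ONNX path and the unique ID are taken from the log above, while the remaining values (precision mode, DLA setting, class count) are typical placeholders and may differ from my actual file:

[property]
gpu-id=0
onnx-file=/home/nvidia/luozw/tensorRT-7/data/onnx/smoke_phone.onnx
batch-size=1
# 0=FP32, 1=INT8, 2=FP16; the log shows INT8 falling back to FP16 (placeholder)
network-mode=1
# the DLA messages in the log suggest DLA was enabled (placeholder)
enable-dla=1
gie-unique-id=10003
# placeholder; depends on the model outputs
num-detected-classes=2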

When I load the same ONNX file with TensorRT directly, it works fine.
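
(For reference, loading the model with TensorRT directly can be done with trtexec along these lines; this is a sketch assuming the default trtexec location on JetPack, and the exact flags may differ from what I used:)

/usr/src/tensorrt/bin/trtexec --onnx=/home/nvidia/luozw/tensorRT-7/data/onnx/smoke_phone.onnx --fp16 --workspace=2048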

How can I solve this problem?

• Hardware Platform (Jetson / GPU)
• DeepStream Version 5.0
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)

Hi,

We are not sure if this issue is related to the environment setup.
Would you mind filling in the complete device information so we can check it for you?

• Hardware Platform (Jetson / GPU)
• DeepStream Version 5.0
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)

Thanks.