Can't recognize anything with my engine

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): Jetson NX
• DeepStream Version: 6.0.1
• JetPack Version (valid for Jetson only): 4.6.2
• TensorRT Version: 8.2.1.8
• NVIDIA GPU Driver Version (valid for GPU only):
• Issue Type (questions, new requirements, bugs): bug — nothing is recognized
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
• Requirement details (This is for new requirements. Include the module name — which plugin or sample application — and the function description.)

Creating Pipeline

Creating Source

Creating H264Parser

Creating Decoder

Creating H264 Encoder
Creating H264 rtppay
Playing file /dli/task/my_apps/my_pigs/my_pigs.h264
Adding elements to Pipeline

Linking elements in the Pipeline

*** DeepStream: Launched RTSP Streaming at rtsp://localhost:8554/ds-test ***

Starting pipeline

Opening in BLOCKING MODE
Opening in BLOCKING MODE
0:00:00.427677511 1699 0x3ffa7a40 WARN nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1161> [UID = 1]: Warning, OpenCV has been deprecated. Using NMS for clustering instead of cv::groupRectangles with topK = 20 and NMS Threshold = 0.5
WARNING: [TRT]: Using an engine plan file across different models of devices is not recommended and is likely to affect performance or even cause errors.
0:00:05.302743745 1699 0x3ffa7a40 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1900> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-6.0/sources/deepstream_python_apps/my_apps/my_pigs/my_change_pig.engine
INFO: [Implicit Engine Info]: layers num: 2
0 INPUT kFLOAT images 3x640x640
1 OUTPUT kFLOAT output0 25200x6

0:00:05.317807965 1699 0x3ffa7a40 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2004> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-6.0/sources/deepstream_python_apps/my_apps/my_pigs/my_change_pig.engine
0:00:05.326425742 1699 0x3ffa7a40 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus: [UID 1]: Load new model:dstest1_pgie_config.txt sucessfully
NvMMLiteOpen : Block : BlockType = 261
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 261
NvMMLiteOpen : Block : BlockType = 4
===== NVMEDIA: NVENC =====
NvMMLiteBlockCreate : Block : BlockType = 4
Frame Number=0 Number of Objects=0 pig_count=0
H264: Profile = 66, Level = 0
Frame Number=1 Number of Objects=0 pig_count=0
NVMEDIA_ENC: bBlitMode is set to TRUE
Frame Number=2 Number of Objects=0 pig_count=0

Frame Number=338 Number of Objects=0 pig_count=0
End-of-stream
deepstream_test1_rtsp_out.py (12.4 KB)
dstest1_pgie_config.txt (3.7 KB)
labels.txt (6 Bytes)

The engine file was generated with trtexec on the Jetson NX, with no errors and no warnings.

Why are no objects recognized or detected?

What is your model used for? Is it YOLOv5? Please make sure the model itself works; you can use a third-party tool to verify it.
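One way to verify the model outside DeepStream is to run the ONNX file with a tool such as onnxruntime and decode the raw `1x25200x6` output yourself. The sketch below is a minimal, hypothetical post-processing step: it assumes the standard YOLOv5 export layout per row (`cx, cy, w, h, objectness, class_score`, all already sigmoid-activated) and an illustrative confidence threshold; adapt the names to your export.

```python
import numpy as np

def decode_yolov5(pred: np.ndarray, conf_thres: float = 0.25):
    """Filter raw YOLOv5 detections of shape (25200, 6).

    Assumes each row is (cx, cy, w, h, objectness, class_score),
    already sigmoid-activated, as in the standard YOLOv5 export.
    Returns boxes as (x1, y1, x2, y2) plus their confidences.
    """
    conf = pred[:, 4] * pred[:, 5]        # objectness * class score
    keep = conf > conf_thres
    boxes = pred[keep, :4]
    # convert center/size format to corner coordinates
    xyxy = np.empty_like(boxes)
    xyxy[:, 0] = boxes[:, 0] - boxes[:, 2] / 2
    xyxy[:, 1] = boxes[:, 1] - boxes[:, 3] / 2
    xyxy[:, 2] = boxes[:, 0] + boxes[:, 2] / 2
    xyxy[:, 3] = boxes[:, 1] + boxes[:, 3] / 2
    return xyxy, conf[keep]

# Feed it the output of an onnxruntime session (sketch, not verified here):
#   sess = onnxruntime.InferenceSession("best.onnx")
#   out = sess.run(None, {"images": img})[0]   # shape (1, 25200, 6)
#   boxes, scores = decode_yolov5(out[0])
```

If this yields zero boxes on a real test image, the model itself is the problem rather than the DeepStream configuration.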

When I ran the command `./trtexec --onnx=/home/jetson/my_apps/my_pigs/best.onnx`, the output was:

[12/06/2022-19:43:03] [I] === Model Options ===
[12/06/2022-19:43:03] [I] Format: ONNX
[12/06/2022-19:43:03] [I] Model: /home/jetson/my_apps/my_pigs/best.onnx
[12/06/2022-19:43:03] [I] Output:
[12/06/2022-19:43:03] [I] === Build Options ===
[12/06/2022-19:43:03] [I] Max batch: explicit batch
[12/06/2022-19:43:03] [I] Workspace: 16 MiB
[12/06/2022-19:43:03] [I] minTiming: 1
[12/06/2022-19:43:03] [I] avgTiming: 8
[12/06/2022-19:43:03] [I] Precision: FP32
[12/06/2022-19:43:03] [I] Calibration:
[12/06/2022-19:43:03] [I] Refit: Disabled
[12/06/2022-19:43:03] [I] Sparsity: Disabled
[12/06/2022-19:43:03] [I] Safe mode: Disabled
[12/06/2022-19:43:03] [I] DirectIO mode: Disabled
[12/06/2022-19:43:03] [I] Restricted mode: Disabled
[12/06/2022-19:43:03] [I] Save engine:
[12/06/2022-19:43:03] [I] Load engine:
[12/06/2022-19:43:03] [I] Profiling verbosity: 0
[12/06/2022-19:43:03] [I] Tactic sources: Using default tactic sources
[12/06/2022-19:43:03] [I] timingCacheMode: local
[12/06/2022-19:43:03] [I] timingCacheFile:
[12/06/2022-19:43:03] [I] Input(s)s format: fp32:CHW
[12/06/2022-19:43:03] [I] Output(s)s format: fp32:CHW
[12/06/2022-19:43:03] [I] Input build shapes: model
[12/06/2022-19:43:03] [I] Input calibration shapes: model
[12/06/2022-19:43:03] [I] === System Options ===
[12/06/2022-19:43:03] [I] Device: 0
[12/06/2022-19:43:03] [I] DLACore:
[12/06/2022-19:43:03] [I] Plugins:
[12/06/2022-19:43:03] [I] === Inference Options ===
[12/06/2022-19:43:03] [I] Batch: Explicit
[12/06/2022-19:43:03] [I] Input inference shapes: model
[12/06/2022-19:43:03] [I] Iterations: 10
[12/06/2022-19:43:03] [I] Duration: 3s (+ 200ms warm up)
[12/06/2022-19:43:03] [I] Sleep time: 0ms
[12/06/2022-19:43:03] [I] Idle time: 0ms
[12/06/2022-19:43:03] [I] Streams: 1
[12/06/2022-19:43:03] [I] ExposeDMA: Disabled
[12/06/2022-19:43:03] [I] Data transfers: Enabled
[12/06/2022-19:43:03] [I] Spin-wait: Disabled
[12/06/2022-19:43:03] [I] Multithreading: Disabled
[12/06/2022-19:43:03] [I] CUDA Graph: Disabled
[12/06/2022-19:43:03] [I] Separate profiling: Disabled
[12/06/2022-19:43:03] [I] Time Deserialize: Disabled
[12/06/2022-19:43:03] [I] Time Refit: Disabled
[12/06/2022-19:43:03] [I] Skip inference: Disabled
[12/06/2022-19:43:03] [I] Inputs:
[12/06/2022-19:43:03] [I] === Reporting Options ===
[12/06/2022-19:43:03] [I] Verbose: Disabled
[12/06/2022-19:43:03] [I] Averages: 10 inferences
[12/06/2022-19:43:03] [I] Percentile: 99
[12/06/2022-19:43:03] [I] Dump refittable layers:Disabled
[12/06/2022-19:43:03] [I] Dump output: Disabled
[12/06/2022-19:43:03] [I] Profile: Disabled
[12/06/2022-19:43:03] [I] Export timing to JSON file:
[12/06/2022-19:43:03] [I] Export output to JSON file:
[12/06/2022-19:43:03] [I] Export profile to JSON file:
[12/06/2022-19:43:03] [I]
[12/06/2022-19:43:03] [I] === Device Information ===
[12/06/2022-19:43:03] [I] Selected Device: Xavier
[12/06/2022-19:43:03] [I] Compute Capability: 7.2
[12/06/2022-19:43:03] [I] SMs: 6
[12/06/2022-19:43:03] [I] Compute Clock Rate: 1.109 GHz
[12/06/2022-19:43:03] [I] Device Global Memory: 7767 MiB
[12/06/2022-19:43:03] [I] Shared Memory per SM: 96 KiB
[12/06/2022-19:43:03] [I] Memory Bus Width: 256 bits (ECC disabled)
[12/06/2022-19:43:03] [I] Memory Clock Rate: 1.109 GHz
[12/06/2022-19:43:03] [I]
[12/06/2022-19:43:03] [I] TensorRT version: 8.2.1
[12/06/2022-19:43:05] [I] [TRT] [MemUsageChange] Init CUDA: CPU +362, GPU +0, now: CPU 381, GPU 4261 (MiB)
[12/06/2022-19:43:05] [I] [TRT] [MemUsageSnapshot] Begin constructing builder kernel library: CPU 381 MiB, GPU 4261 MiB
[12/06/2022-19:43:06] [I] [TRT] [MemUsageSnapshot] End constructing builder kernel library: CPU 486 MiB, GPU 4367 MiB
[12/06/2022-19:43:06] [I] Start parsing network model
[12/06/2022-19:43:06] [I] [TRT] ----------------------------------------------------------------
[12/06/2022-19:43:06] [I] [TRT] Input filename: /home/jetson/my_apps/my_pigs/best.onnx
[12/06/2022-19:43:06] [I] [TRT] ONNX IR version: 0.0.7
[12/06/2022-19:43:06] [I] [TRT] Opset version: 12
[12/06/2022-19:43:06] [I] [TRT] Producer name: pytorch
[12/06/2022-19:43:06] [I] [TRT] Producer version: 1.13.0
[12/06/2022-19:43:06] [I] [TRT] Domain:
[12/06/2022-19:43:06] [I] [TRT] Model version: 0
[12/06/2022-19:43:06] [I] [TRT] Doc string:
[12/06/2022-19:43:06] [I] [TRT] ----------------------------------------------------------------
[12/06/2022-19:43:06] [W] [TRT] onnx2trt_utils.cpp:366: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[12/06/2022-19:43:06] [I] Finish parsing network model
[12/06/2022-19:43:06] [I] [TRT] ---------- Layers Running on DLA ----------
[12/06/2022-19:43:06] [I] [TRT] ---------- Layers Running on GPU ----------
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] /model.0/conv/Conv
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] PWN(PWN(/model.0/act/Sigmoid), /model.0/act/Mul)
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] /model.1/conv/Conv
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] PWN(PWN(/model.1/act/Sigmoid), /model.1/act/Mul)
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] /model.2/cv1/conv/Conv || /model.2/cv2/conv/Conv
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] PWN(PWN(/model.2/cv1/act/Sigmoid), /model.2/cv1/act/Mul)
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] /model.2/m/m.0/cv1/conv/Conv
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] PWN(PWN(/model.2/m/m.0/cv1/act/Sigmoid), /model.2/m/m.0/cv1/act/Mul)
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] /model.2/m/m.0/cv2/conv/Conv
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] PWN(PWN(PWN(/model.2/m/m.0/cv2/act/Sigmoid), /model.2/m/m.0/cv2/act/Mul), /model.2/m/m.0/Add)
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] PWN(PWN(/model.2/cv2/act/Sigmoid), /model.2/cv2/act/Mul)
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] /model.2/cv3/conv/Conv
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] PWN(PWN(/model.2/cv3/act/Sigmoid), /model.2/cv3/act/Mul)
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] /model.3/conv/Conv
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] PWN(PWN(/model.3/act/Sigmoid), /model.3/act/Mul)
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] /model.4/cv1/conv/Conv || /model.4/cv2/conv/Conv
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] PWN(PWN(/model.4/cv1/act/Sigmoid), /model.4/cv1/act/Mul)
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] /model.4/m/m.0/cv1/conv/Conv
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] PWN(PWN(/model.4/m/m.0/cv1/act/Sigmoid), /model.4/m/m.0/cv1/act/Mul)
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] /model.4/m/m.0/cv2/conv/Conv
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] PWN(PWN(PWN(/model.4/m/m.0/cv2/act/Sigmoid), /model.4/m/m.0/cv2/act/Mul), /model.4/m/m.0/Add)
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] /model.4/m/m.1/cv1/conv/Conv
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] PWN(PWN(/model.4/m/m.1/cv1/act/Sigmoid), /model.4/m/m.1/cv1/act/Mul)
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] /model.4/m/m.1/cv2/conv/Conv
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] PWN(PWN(PWN(/model.4/m/m.1/cv2/act/Sigmoid), /model.4/m/m.1/cv2/act/Mul), /model.4/m/m.1/Add)
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] PWN(PWN(/model.4/cv2/act/Sigmoid), /model.4/cv2/act/Mul)
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] /model.4/cv3/conv/Conv
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] PWN(PWN(/model.4/cv3/act/Sigmoid), /model.4/cv3/act/Mul)
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] /model.5/conv/Conv
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] PWN(PWN(/model.5/act/Sigmoid), /model.5/act/Mul)
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] /model.6/cv1/conv/Conv || /model.6/cv2/conv/Conv
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] PWN(PWN(/model.6/cv1/act/Sigmoid), /model.6/cv1/act/Mul)
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] /model.6/m/m.0/cv1/conv/Conv
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] PWN(PWN(/model.6/m/m.0/cv1/act/Sigmoid), /model.6/m/m.0/cv1/act/Mul)
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] /model.6/m/m.0/cv2/conv/Conv
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] PWN(PWN(PWN(/model.6/m/m.0/cv2/act/Sigmoid), /model.6/m/m.0/cv2/act/Mul), /model.6/m/m.0/Add)
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] /model.6/m/m.1/cv1/conv/Conv
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] PWN(PWN(/model.6/m/m.1/cv1/act/Sigmoid), /model.6/m/m.1/cv1/act/Mul)
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] /model.6/m/m.1/cv2/conv/Conv
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] PWN(PWN(PWN(/model.6/m/m.1/cv2/act/Sigmoid), /model.6/m/m.1/cv2/act/Mul), /model.6/m/m.1/Add)
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] /model.6/m/m.2/cv1/conv/Conv
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] PWN(PWN(/model.6/m/m.2/cv1/act/Sigmoid), /model.6/m/m.2/cv1/act/Mul)
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] /model.6/m/m.2/cv2/conv/Conv
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] PWN(PWN(PWN(/model.6/m/m.2/cv2/act/Sigmoid), /model.6/m/m.2/cv2/act/Mul), /model.6/m/m.2/Add)
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] PWN(PWN(/model.6/cv2/act/Sigmoid), /model.6/cv2/act/Mul)
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] /model.6/cv3/conv/Conv
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] PWN(PWN(/model.6/cv3/act/Sigmoid), /model.6/cv3/act/Mul)
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] /model.7/conv/Conv
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] PWN(PWN(/model.7/act/Sigmoid), /model.7/act/Mul)
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] /model.8/cv1/conv/Conv || /model.8/cv2/conv/Conv
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] PWN(PWN(/model.8/cv1/act/Sigmoid), /model.8/cv1/act/Mul)
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] /model.8/m/m.0/cv1/conv/Conv
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] PWN(PWN(/model.8/m/m.0/cv1/act/Sigmoid), /model.8/m/m.0/cv1/act/Mul)
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] /model.8/m/m.0/cv2/conv/Conv
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] PWN(PWN(PWN(/model.8/m/m.0/cv2/act/Sigmoid), /model.8/m/m.0/cv2/act/Mul), /model.8/m/m.0/Add)
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] PWN(PWN(/model.8/cv2/act/Sigmoid), /model.8/cv2/act/Mul)
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] /model.8/cv3/conv/Conv
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] PWN(PWN(/model.8/cv3/act/Sigmoid), /model.8/cv3/act/Mul)
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] /model.9/cv1/conv/Conv
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] PWN(PWN(/model.9/cv1/act/Sigmoid), /model.9/cv1/act/Mul)
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] /model.9/m/MaxPool
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] /model.9/m_1/MaxPool
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] /model.9/m_2/MaxPool
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] /model.9/cv1/act/Mul_output_0 copy
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] /model.9/m/MaxPool_output_0 copy
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] /model.9/m_1/MaxPool_output_0 copy
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] /model.9/m_2/MaxPool_output_0 copy
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] /model.9/cv2/conv/Conv
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] PWN(PWN(/model.9/cv2/act/Sigmoid), /model.9/cv2/act/Mul)
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] /model.10/conv/Conv
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] PWN(PWN(/model.10/act/Sigmoid), /model.10/act/Mul)
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] /model.11/Resize
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] /model.11/Resize_output_0 copy
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] /model.13/cv1/conv/Conv || /model.13/cv2/conv/Conv
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] PWN(PWN(/model.13/cv1/act/Sigmoid), /model.13/cv1/act/Mul)
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] /model.13/m/m.0/cv1/conv/Conv
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] PWN(PWN(/model.13/m/m.0/cv1/act/Sigmoid), /model.13/m/m.0/cv1/act/Mul)
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] /model.13/m/m.0/cv2/conv/Conv
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] PWN(PWN(/model.13/m/m.0/cv2/act/Sigmoid), /model.13/m/m.0/cv2/act/Mul)
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] PWN(PWN(/model.13/cv2/act/Sigmoid), /model.13/cv2/act/Mul)
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] /model.13/cv3/conv/Conv
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] PWN(PWN(/model.13/cv3/act/Sigmoid), /model.13/cv3/act/Mul)
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] /model.14/conv/Conv
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] PWN(PWN(/model.14/act/Sigmoid), /model.14/act/Mul)
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] /model.15/Resize
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] /model.15/Resize_output_0 copy
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] /model.17/cv1/conv/Conv || /model.17/cv2/conv/Conv
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] PWN(PWN(/model.17/cv1/act/Sigmoid), /model.17/cv1/act/Mul)
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] /model.17/m/m.0/cv1/conv/Conv
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] PWN(PWN(/model.17/m/m.0/cv1/act/Sigmoid), /model.17/m/m.0/cv1/act/Mul)
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] /model.17/m/m.0/cv2/conv/Conv
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] PWN(PWN(/model.17/m/m.0/cv2/act/Sigmoid), /model.17/m/m.0/cv2/act/Mul)
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] PWN(PWN(/model.17/cv2/act/Sigmoid), /model.17/cv2/act/Mul)
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] /model.17/cv3/conv/Conv
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] PWN(PWN(/model.17/cv3/act/Sigmoid), /model.17/cv3/act/Mul)
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] /model.18/conv/Conv
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] PWN(PWN(/model.18/act/Sigmoid), /model.18/act/Mul)
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] /model.14/act/Mul_output_0 copy
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] /model.20/cv1/conv/Conv || /model.20/cv2/conv/Conv
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] PWN(PWN(/model.20/cv1/act/Sigmoid), /model.20/cv1/act/Mul)
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] /model.20/m/m.0/cv1/conv/Conv
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] PWN(PWN(/model.20/m/m.0/cv1/act/Sigmoid), /model.20/m/m.0/cv1/act/Mul)
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] /model.20/m/m.0/cv2/conv/Conv
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] PWN(PWN(/model.20/m/m.0/cv2/act/Sigmoid), /model.20/m/m.0/cv2/act/Mul)
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] PWN(PWN(/model.20/cv2/act/Sigmoid), /model.20/cv2/act/Mul)
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] /model.20/cv3/conv/Conv
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] PWN(PWN(/model.20/cv3/act/Sigmoid), /model.20/cv3/act/Mul)
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] /model.21/conv/Conv
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] PWN(PWN(/model.21/act/Sigmoid), /model.21/act/Mul)
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] /model.10/act/Mul_output_0 copy
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] /model.23/cv1/conv/Conv || /model.23/cv2/conv/Conv
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] PWN(PWN(/model.23/cv1/act/Sigmoid), /model.23/cv1/act/Mul)
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] /model.23/m/m.0/cv1/conv/Conv
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] PWN(PWN(/model.23/m/m.0/cv1/act/Sigmoid), /model.23/m/m.0/cv1/act/Mul)
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] /model.23/m/m.0/cv2/conv/Conv
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] PWN(PWN(/model.23/m/m.0/cv2/act/Sigmoid), /model.23/m/m.0/cv2/act/Mul)
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] PWN(PWN(/model.23/cv2/act/Sigmoid), /model.23/cv2/act/Mul)
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] /model.23/cv3/conv/Conv
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] PWN(PWN(/model.23/cv3/act/Sigmoid), /model.23/cv3/act/Mul)
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] /model.24/m.0/Conv
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] /model.24/Reshape + /model.24/Transpose
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] PWN(/model.24/Sigmoid)
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] /model.24/Split
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] /model.24/Split_0
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] /model.24/Split_1
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] /model.24/Constant_2_output_0
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] PWN(PWN(/model.24/Constant_1_output_0 + (Unnamed Layer* 204) [Shuffle] + /model.24/Mul, /model.24/Add), /model.24/Constant_3_output_0 + (Unnamed Layer* 209) [Shuffle] + /model.24/Mul_1)
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] /model.24/Constant_6_output_0
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] PWN(PWN(/model.24/Constant_5_output_0 + (Unnamed Layer* 215) [Shuffle], PWN(/model.24/Constant_4_output_0 + (Unnamed Layer* 212) [Shuffle] + /model.24/Mul_2, /model.24/Pow)), /model.24/Mul_3)
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] /model.24/Mul_1_output_0 copy
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] /model.24/Mul_3_output_0 copy
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] /model.24/Split_output_2 copy
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] /model.24/Reshape_1
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] /model.24/m.1/Conv
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] /model.24/Reshape_2 + /model.24/Transpose_1
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] PWN(/model.24/Sigmoid_1)
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] /model.24/Split_1_2
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] /model.24/Split_1_3
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] /model.24/Split_1_4
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] /model.24/Constant_10_output_0
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] PWN(PWN(/model.24/Constant_9_output_0 + (Unnamed Layer* 229) [Shuffle] + /model.24/Mul_4, /model.24/Add_1), /model.24/Constant_11_output_0 + (Unnamed Layer* 234) [Shuffle] + /model.24/Mul_5)
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] /model.24/Constant_14_output_0
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] PWN(PWN(/model.24/Constant_13_output_0 + (Unnamed Layer* 240) [Shuffle], PWN(/model.24/Constant_12_output_0 + (Unnamed Layer* 237) [Shuffle] + /model.24/Mul_6, /model.24/Pow_1)), /model.24/Mul_7)
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] /model.24/Mul_5_output_0 copy
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] /model.24/Mul_7_output_0 copy
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] /model.24/Split_1_output_2 copy
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] /model.24/Reshape_3
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] /model.24/m.2/Conv
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] /model.24/Reshape_4 + /model.24/Transpose_2
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] PWN(/model.24/Sigmoid_2)
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] /model.24/Split_2
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] /model.24/Split_2_5
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] /model.24/Split_2_6
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] /model.24/Constant_18_output_0
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] PWN(PWN(/model.24/Constant_17_output_0 + (Unnamed Layer* 254) [Shuffle] + /model.24/Mul_8, /model.24/Add_2), /model.24/Constant_19_output_0 + (Unnamed Layer* 259) [Shuffle] + /model.24/Mul_9)
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] /model.24/Constant_22_output_0
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] PWN(PWN(/model.24/Constant_21_output_0 + (Unnamed Layer* 265) [Shuffle], PWN(/model.24/Constant_20_output_0 + (Unnamed Layer* 262) [Shuffle] + /model.24/Mul_10, /model.24/Pow_2)), /model.24/Mul_11)
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] /model.24/Mul_9_output_0 copy
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] /model.24/Mul_11_output_0 copy
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] /model.24/Split_2_output_2 copy
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] /model.24/Reshape_5
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] /model.24/Reshape_1_output_0 copy
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] /model.24/Reshape_3_output_0 copy
[12/06/2022-19:43:06] [I] [TRT] [GpuLayer] /model.24/Reshape_5_output_0 copy
[12/06/2022-19:43:07] [I] [TRT] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +226, GPU +204, now: CPU 744, GPU 4656 (MiB)
[12/06/2022-19:43:09] [I] [TRT] [MemUsageChange] Init cuDNN: CPU +307, GPU +308, now: CPU 1051, GPU 4964 (MiB)
[12/06/2022-19:43:09] [I] [TRT] Local timing cache in use. Profiling results in this builder pass will not be stored.
[12/06/2022-19:43:32] [I] [TRT] Some tactics do not have sufficient workspace memory to run. Increasing workspace size may increase performance, please check verbose output.
[12/06/2022-19:50:54] [I] [TRT] Detected 1 inputs and 4 output network tensors.
[12/06/2022-19:50:54] [I] [TRT] Total Host Persistent Memory: 149104
[12/06/2022-19:50:54] [I] [TRT] Total Device Persistent Memory: 37168640
[12/06/2022-19:50:54] [I] [TRT] Total Scratch Memory: 0
[12/06/2022-19:50:54] [I] [TRT] [MemUsageStats] Peak memory usage of TRT CPU/GPU memory allocators: CPU 7 MiB, GPU 105 MiB
[12/06/2022-19:50:54] [I] [TRT] [BlockAssignment] Algorithm ShiftNTopDown took 52.9292ms to assign 9 blocks to 135 nodes requiring 35635202 bytes.
[12/06/2022-19:50:54] [I] [TRT] Total Activation Memory: 35635202
[12/06/2022-19:50:54] [I] [TRT] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +0, now: CPU 1523, GPU 5952 (MiB)
[12/06/2022-19:50:54] [I] [TRT] [MemUsageChange] Init cuDNN: CPU +0, GPU +0, now: CPU 1523, GPU 5952 (MiB)
[12/06/2022-19:50:54] [I] [TRT] [MemUsageChange] TensorRT-managed allocation in building engine: CPU +3, GPU +64, now: CPU 3, GPU 64 (MiB)
[12/06/2022-19:50:54] [I] [TRT] [MemUsageChange] Init CUDA: CPU +0, GPU +0, now: CPU 1555, GPU 5996 (MiB)
[12/06/2022-19:50:54] [I] [TRT] Loaded engine size: 43 MiB
[12/06/2022-19:50:54] [I] [TRT] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +0, now: CPU 1563, GPU 5996 (MiB)
[12/06/2022-19:50:54] [I] [TRT] [MemUsageChange] Init cuDNN: CPU +0, GPU +0, now: CPU 1563, GPU 5996 (MiB)
[12/06/2022-19:50:54] [I] [TRT] [MemUsageChange] TensorRT-managed allocation in engine deserialization: CPU +0, GPU +42, now: CPU 0, GPU 42 (MiB)
[12/06/2022-19:50:54] [I] Engine built in 470.922 sec.
[12/06/2022-19:50:54] [I] [TRT] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +0, now: CPU 1386, GPU 5953 (MiB)
[12/06/2022-19:50:54] [I] [TRT] [MemUsageChange] Init cuDNN: CPU +1, GPU +0, now: CPU 1387, GPU 5953 (MiB)
[12/06/2022-19:50:54] [I] [TRT] [MemUsageChange] TensorRT-managed allocation in IExecutionContext creation: CPU +0, GPU +70, now: CPU 0, GPU 112 (MiB)
[12/06/2022-19:50:54] [I] Using random values for input images
[12/06/2022-19:50:54] [I] Created input binding for images with dimensions 1x3x640x640
[12/06/2022-19:50:54] [I] Using random values for output output0
[12/06/2022-19:50:54] [I] Created output binding for output0 with dimensions 1x25200x6
[12/06/2022-19:50:54] [I] Starting inference
[12/06/2022-19:50:58] [I] Warmup completed 4 queries over 200 ms
[12/06/2022-19:50:58] [I] Timing trace has 60 queries over 3.05645 s
[12/06/2022-19:50:58] [I]
[12/06/2022-19:50:58] [I] === Trace details ===
[12/06/2022-19:50:58] [I] Trace averages of 10 runs:
[12/06/2022-19:50:58] [I] Average on 10 runs - GPU latency: 50.612 ms - Host latency: 51.0146 ms (end to end 51.0242 ms, enqueue 3.02969 ms)
[12/06/2022-19:50:58] [I] Average on 10 runs - GPU latency: 50.5397 ms - Host latency: 50.9419 ms (end to end 50.9501 ms, enqueue 2.99625 ms)
[12/06/2022-19:50:58] [I] Average on 10 runs - GPU latency: 50.4972 ms - Host latency: 50.8993 ms (end to end 50.9079 ms, enqueue 2.81522 ms)
[12/06/2022-19:50:58] [I] Average on 10 runs - GPU latency: 50.3701 ms - Host latency: 50.7717 ms (end to end 50.7817 ms, enqueue 2.67208 ms)
[12/06/2022-19:50:58] [I] Average on 10 runs - GPU latency: 50.8235 ms - Host latency: 51.2253 ms (end to end 51.2347 ms, enqueue 2.71804 ms)
[12/06/2022-19:50:58] [I] Average on 10 runs - GPU latency: 50.3218 ms - Host latency: 50.7272 ms (end to end 50.7374 ms, enqueue 2.70623 ms)
[12/06/2022-19:50:58] [I]
[12/06/2022-19:50:58] [I] === Performance summary ===
[12/06/2022-19:50:58] [I] Throughput: 19.6306 qps
[12/06/2022-19:50:58] [I] Latency: min = 50.5018 ms, max = 53.2183 ms, mean = 50.93 ms, median = 50.8052 ms, percentile(99%) = 53.2183 ms
[12/06/2022-19:50:58] [I] End-to-End Host Latency: min = 50.5098 ms, max = 53.2253 ms, mean = 50.9393 ms, median = 50.8172 ms, percentile(99%) = 53.2253 ms
[12/06/2022-19:50:58] [I] Enqueue Time: min = 2.56158 ms, max = 3.26117 ms, mean = 2.82292 ms, median = 2.76025 ms, percentile(99%) = 3.26117 ms
[12/06/2022-19:50:58] [I] H2D Latency: min = 0.346558 ms, max = 0.391602 ms, mean = 0.348452 ms, median = 0.347412 ms, percentile(99%) = 0.391602 ms
[12/06/2022-19:50:58] [I] GPU Compute Time: min = 50.0991 ms, max = 52.8159 ms, mean = 50.5274 ms, median = 50.4038 ms, percentile(99%) = 52.8159 ms
[12/06/2022-19:50:58] [I] D2H Latency: min = 0.0483398 ms, max = 0.0568542 ms, mean = 0.0541463 ms, median = 0.0545349 ms, percentile(99%) = 0.0568542 ms
[12/06/2022-19:50:58] [I] Total Host Walltime: 3.05645 s
[12/06/2022-19:50:58] [I] Total GPU Compute Time: 3.03164 s
[12/06/2022-19:50:58] [I] Explanations of the performance metrics are printed in the verbose logs.

  1. What is your model used for? Is it YOLOv5? Did the model work correctly with a third-party tool?
  2. If it is not a YOLOv5 model, you should not use parse-bbox-func-name=NvDsInferParseCustomYoloV5.
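For reference, a YOLOv5 model in nvinfer needs both a custom bbox parser function and the shared library that implements it. The fragment below is a hypothetical example, not taken from the attached dstest1_pgie_config.txt; file names and the class count are placeholders:

```ini
# Hypothetical pgie config fragment for a YOLOv5 model; paths are examples.
[property]
model-engine-file=my_change_pig.engine
labelfile-path=labels.txt
num-detected-classes=1
network-mode=0              # 0=FP32, 1=INT8, 2=FP16 (must match the engine)
# A YOLOv5 output tensor requires a custom parser; both lines are needed:
parse-bbox-func-name=NvDsInferParseCustomYoloV5
custom-lib-path=libnvdsinfer_custom_impl_Yolo.so
```

If custom-lib-path is missing or the parser name does not match a function exported by that library, nvinfer will silently produce no objects.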

There has been no update from you for a while, so we assume this is no longer an issue.
Hence we are closing this topic. If you need further support, please open a new one.
Thanks

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.