• Hardware Platform (Jetson / GPU): Jetson
• DeepStream Version: 6.0
• JetPack Version (valid for Jetson only): 4.6
• TensorRT Version: 8.0.1.6
I have trained a PyTorch model, best.pt. Because it uses GhostConv, DWConv, and similar layers, it is difficult to convert it into correct .cfg and .wts files with gen_wts_yoloV5.py from DeepStream-Yolo or from tensorrtx. So I exported the model to ONNX in yolov5 and then used trtexec to produce the .engine file (roughly as shown below).
Now I want to load that .engine file directly in DeepStream, without the .cfg and .wts, but I have not succeeded.
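For reference, the ONNX and the engine were produced approximately like this (exact export flags depend on the yolov5 version; paths are shortened):
python export.py --weights best.pt --include onnx --imgsz 640
/usr/src/tensorrt/bin/trtexec --onnx=best.onnx --fp16 --saveEngine=best_exp.engine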
deepstream-app -c deepstream_app_config.txt
Using winsys: x11
0:00:05.465532190 21034 0xf7b1f50 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1900> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-6.0/sources/DeepStream-Yolo/./best_exp.engine
INFO: [Implicit Engine Info]: layers num: 5
0 INPUT kHALF images 3x640x640
1 OUTPUT kHALF 490 3x80x80x6
2 OUTPUT kHALF 558 3x40x40x6
3 OUTPUT kHALF 626 3x20x20x6
4 OUTPUT kFLOAT output 25200x6
0:00:05.465893821 21034 0xf7b1f50 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2004> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-6.0/sources/DeepStream-Yolo/./best_exp.engine
0:00:05.475766009 21034 0xf7b1f50 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<primary_gie> [UID 1]: Load new model:/opt/nvidia/deepstream/deepstream-6.0/sources/DeepStream-Yolo/config_infer_primary_yoloV5.txt sucessfully
Runtime commands:
h: Print this help
q: Quit
p: Pause
r: Resume
NOTE: To expand a source in the 2D tiled display and view object details, left-click on the source.
To go back to the tiled display, right-click anywhere on the window.
**PERF: FPS 0 (Avg)
**PERF: 0.00 (0.00)
** INFO: <bus_callback:194>: Pipeline ready
Opening in BLOCKING MODE
NvMMLiteOpen : Block : BlockType = 260
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 260
** INFO: <bus_callback:180>: Pipeline running
WARNING: Num classes mismatch. Configured: 1, detected by network: 0
deepstream-app: nvdsparsebbox_Yolo.cpp:137: bool NvDsInferParseCustomYolo(const std::vector<NvDsInferLayerInfo>&, const NvDsInferNetworkInfo&, const NvDsInferParseDetectionParams&, std::vector<NvDsInferParseObjectInfo>&, const uint&, const uint&): Assertion `layer.inferDims.numDims == 3' failed.
Aborted (core dumped)
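If I read the assertion correctly, NvDsInferParseCustomYolo in nvdsparsebbox_Yolo.cpp expects every output layer to be 3-dimensional (the 3x80x80x6-style heads), while my ONNX export also exposes the flat 2-D output tensor (25200x6), so the stock parser aborts on it. Is the intended fix to write a custom parser against that flat tensor, roughly like the sketch below, and point parse-bbox-func-name / custom-lib-path at it? (The function name and the per-row layout [cx, cy, w, h, objectness, class-score] are my assumptions, not code from DeepStream-Yolo.)
/* Hypothetical sketch, not DeepStream-Yolo code: parse the flat 25200x6
 * "output" tensor of the YOLOv5 ONNX export. Assumes each row is
 * [cx, cy, w, h, objectness, class0-score] in network (640x640) coordinates
 * and that the tensor is FP32, as the engine log above reports. */
#include <cstring>
#include <vector>
#include "nvdsinfer_custom_impl.h"

extern "C" bool NvDsInferParseYoloV5Onnx(
    std::vector<NvDsInferLayerInfo> const &outputLayersInfo,
    NvDsInferNetworkInfo const &networkInfo,
    NvDsInferParseDetectionParams const &detectionParams,
    std::vector<NvDsInferParseObjectInfo> &objectList)
{
    (void) networkInfo; /* boxes are already in network coordinates */

    /* Pick the 2-D "output" tensor; the three 3-D heads are ignored. */
    const NvDsInferLayerInfo *layer = nullptr;
    for (auto &l : outputLayersInfo)
        if (l.layerName && std::strcmp(l.layerName, "output") == 0)
            layer = &l;
    if (!layer || layer->inferDims.numDims != 2)
        return false;

    const unsigned int numBoxes = layer->inferDims.d[0];   /* 25200 */
    const unsigned int rowSize  = layer->inferDims.d[1];   /* 6 */
    const float *data = static_cast<const float *>(layer->buffer);
    const float threshold = detectionParams.perClassPreclusterThreshold[0];

    for (unsigned int i = 0; i < numBoxes; ++i) {
        const float *row = data + i * rowSize;
        const float score = row[4] * row[5];                /* obj * class0 */
        if (score < threshold)
            continue;

        NvDsInferParseObjectInfo obj;
        obj.classId = 0;                                    /* single class */
        obj.detectionConfidence = score;
        obj.left   = row[0] - row[2] / 2.0f;                /* cx - w/2 */
        obj.top    = row[1] - row[3] / 2.0f;                /* cy - h/2 */
        obj.width  = row[2];
        obj.height = row[3];
        objectList.push_back(obj);
    }
    return true;
}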
deepstream_app_config.txt:
[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5
[tiled-display]
enable=1
rows=1
columns=1
width=1280
height=720
gpu-id=0
nvbuf-memory-type=0
[source0]
enable=1
type=3
uri=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4
num-sources=1
gpu-id=0
cudadec-memtype=0
[sink0]
enable=1
type=2
sync=0
gpu-id=0
nvbuf-memory-type=0
[osd]
enable=1
gpu-id=0
border-width=5
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Serif
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0
nvbuf-memory-type=0
[streammux]
gpu-id=0
live-source=0
batch-size=1
batched-push-timeout=40000
width=1920
height=1080
enable-padding=0
nvbuf-memory-type=0
[primary-gie]
enable=1
gpu-id=0
gie-unique-id=1
nvbuf-memory-type=0
config-file=config_infer_primary_yoloV5.txt
[tests]
file-loop=0
config_infer_primary_yoloV5.txt:
[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
model-color-format=0
custom-network-config=yolov5n.cfg
model-file=yolov5n.wts
model-engine-file=best_exp.engine
#int8-calib-file=calib.table
labelfile-path=labels.txt
batch-size=1
network-mode=0
num-detected-classes=1
interval=0
gie-unique-id=1
process-mode=1
network-type=0
cluster-mode=2
maintain-aspect-ratio=1
parse-bbox-func-name=NvDsInferParseYolo
custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
engine-create-func-name=NvDsInferYoloCudaEngineGet
[class-attrs-all]
nms-iou-threshold=0.45
pre-cluster-threshold=0.25
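For the "run from the engine only" attempt, my guess at config_infer_primary_yoloV5.txt would be to drop the .cfg/.wts and engine-create-func-name lines and keep just model-engine-file, something like this (network-mode=2 only because the engine was built with --fp16; I am not sure it matters once the engine already exists):
[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
model-color-format=0
# custom-network-config, model-file and engine-create-func-name removed
model-engine-file=best_exp.engine
labelfile-path=labels.txt
batch-size=1
network-mode=2
num-detected-classes=1
interval=0
gie-unique-id=1
process-mode=1
network-type=0
cluster-mode=2
maintain-aspect-ratio=1
# assuming a custom parser like the sketch above is compiled into the lib below
parse-bbox-func-name=NvDsInferParseYoloV5Onnx
custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
[class-attrs-all]
nms-iou-threshold=0.45
pre-cluster-threshold=0.25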
How can I fix this error? Or is it possible to run with only the .engine file, without .cfg and .wts?