Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU)
Jetson Xavier NX
• DeepStream Version
5.0
• JetPack Version (valid for Jetson only)
4.5
• TensorRT Version
7.1.3
• Issue Type( questions, new requirements, bugs)
question
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
Not applicable (this is a question, not a bug report)
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)
I did transfer learning for YOLOv4 using the TAO Toolkit and obtained two files: trt.engine and yolov4_resnet18_epoch_080.etlt. I installed TensorRT OSS and built the custom plugins from the deepstream_tlt_apps git repository (installing git-lfs as required). The following is the [primary-gie] section of my main config file:
[primary-gie]
enable=1
#gpu-id=0
batch-size=1
## 0=FP32, 1=INT8, 2=FP16 mode
bbox-border-color0=1;0;0;1
bbox-border-color1=0;1;1;1
bbox-border-color2=0;1;1;1
bbox-border-color3=0;1;0;1
nvbuf-memory-type=0
interval=0
gie-unique-id=1
#model-engine-file=../../../../../samples/models/Primary_Detector/resnet10.caffemodel_b4_gpu0_int8.engine
#labelfile-path=../../../../../samples/models/Primary_Detector/labels.txt
config-file=/opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/deepstream-test5-5.1/DLModel/yolo_v4_config_file.txt
#infer-raw-output-dir=../../../../../samples/primary_detector_raw_output/
And the following is the nvinfer properties file (yolo_v4_config_file.txt):
[property]
gpu-id=0
# preprocessing parameters.
net-scale-factor=1
model-color-format=1
# model paths.
#int8-calib-file=<Path to optional INT8 calibration cache>
labelfile-path=./labels.txt
tlt-encoded-model=./resnet10_detector_250Ep.etlt
model-engine-file=./trt.engine
tlt-model-key=nvidia_tao
input-dims=3;384;1248;0 # where c = number of channels, h = height of the model input, w = width of model input, 0: implies CHW format.
infer-dimes=3,544,900
uff-input-blob-name=Input
uff-input-order=0
batch_size=1
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=0
num-detected-classes=6
interval=0
gie-unique-id=1
is-classifier=0
output-blob-names=BatchedNMS
parse-bbox-func-name=NvDsInferParseCustomBatchedNMSTLT
custom-lib-path=/home/rudi/deepstream_tlt_apps/post_processor/libnvds_infercustomparser_tlt.so
cluster-mode=3
#enable_dbscan=0
[class-attrs-all]
threshold=0.3
group-threshold=1
#minBoxes=3
roi-top-offset=0
roi-bottom-offset=0
detected-min-w=0
detected-min-h=0
detected-max-w=0
detected-max-h=0
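For reference, several keys in the properties file above are flagged as deprecated or unknown in the startup log further down. Going only by the warnings nvinfer itself prints, my understanding is that the current equivalents would look like the following (a sketch, not yet verified on this setup):

```
# 'input-dims' is deprecated, and 'infer-dimes' looks like a typo for 'infer-dims'
# (semicolon-separated c;h;w)
infer-dims=3;384;1248

# nvinfer keys use hyphens, not underscores
batch-size=1

# under [class-attrs-all], 'threshold' is deprecated in favor of:
pre-cluster-threshold=0.3
```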
Running the app, I receive the following errors:
sudo /opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/deepstream-test5-5.1/deepstream-test5-app -c /opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/deepstream-test5-5.1/configs/test5_config_file_src_infer_aws_til_yolov4.txt -t --tiledtext
Data read from memory: 0
Data read from memory: 0
Data read from memory: 0
*** DeepStream: Launched RTSP Streaming at rtsp://localhost:8554/ds-test ***
Warning: 'input-dims' parameter has been deprecated. Use 'infer-dims' instead.
Unknown or legacy key specified 'infer-dimes' for group [property]
Unknown or legacy key specified 'batch_size' for group [property]
Warn: 'threshold' parameter has been deprecated. Use 'pre-cluster-threshold' instead.
Opening in BLOCKING MODE
Opening in BLOCKING MODE
Opening in BLOCKING MODE
Opening in BLOCKING MODE
Using winsys: x11
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream-5.0/lib/libnvds_mot_klt.so
gstnvtracker: Optional NvMOT_RemoveStreams not implemented
gstnvtracker: Batch processing is OFF
gstnvtracker: Past frame output is OFF
ERROR: [TRT]: coreReadArchive.cpp (38) - Serialization Error in verifyHeader: 0 (Version tag does not match)
ERROR: [TRT]: INVALID_STATE: std::exception
ERROR: [TRT]: INVALID_CONFIG: Deserialize the cuda engine failed.
ERROR: Deserialize engine failed from file: /opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/deepstream-test5-5.1/DLModel/trt.engine
0:00:01.668797625 12142 0x559dac2c60 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1690> [UID = 1]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/deepstream-test5-5.1/DLModel/trt.engine failed
0:00:01.668897785 12142 0x559dac2c60 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1797> [UID = 1]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/deepstream-test5-5.1/DLModel/trt.engine failed, try rebuild
0:00:01.668938170 12142 0x559dac2c60 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1715> [UID = 1]: Trying to create engine from model files
ERROR: [TRT]: UffParser: Could not read buffer.
parseModel: Failed to parse UFF model
ERROR: failed to build network since parsing model errors.
ERROR: Failed to create network using custom network creation function
ERROR: Failed to get cuda engine from custom library API
0:00:01.786340602 12142 0x559dac2c60 ERROR nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1735> [UID = 1]: build engine file failed
Segmentation fault
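From what I have read, the "Version tag does not match" deserialization error usually means trt.engine was serialized with a different TensorRT version (or on a different device) than the TensorRT 7.1.3 installed on this Xavier NX; TensorRT engines are not portable across versions or GPU architectures. Would regenerating the engine on the Jetson itself be the right fix? A command sketch along these lines is what I have in mind (the tao-converter flags, paths, and key are assumptions based on my setup above, and the exact options depend on the tao-converter build for JetPack 4.5):

```
# Sketch only: regenerate the engine on the Jetson so it matches TRT 7.1.3.
# Requires the Jetson/JetPack build of tao-converter; paths and key are placeholders.
./tao-converter yolov4_resnet18_epoch_080.etlt \
    -k nvidia_tao \
    -d 3,384,1248 \
    -o BatchedNMS \
    -t fp32 \
    -m 1 \
    -e trt.engine
```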