Hello.
I am trying to use our TAO-converted TensorRT UNet model in a DeepStream pipeline:
gst-launch-1.0 filesrc location=02****.mp4 ! qtdemux ! h264parse ! nvv4l2decoder ! m.sink_0 \
  nvstreammux name=m batch-size=1 width=1920 height=1080 ! nvvideoconvert ! \
  nvdspreprocess config-file=/opt/nvidia/deepstream/deepstream/sources/gst-plugins/gst-nvdspreprocess/config_preprocess.txt ! \
  nvinfer config-file-path=pgie_unet_tao_config.txt input-tensor-meta=1 batch-size=7 ! \
  nvmultistreamtiler width=1920 height=1080 ! nvvideoconvert ! nvv4l2h264enc ! h264parse ! \
  'video/x-h264,stream-format=avc' ! matroskamux ! filesink location=gst_out.mkv
But while executing this GStreamer pipeline I get the errors below:
Setting pipeline to PAUSED ...
ERROR: [TRT]: 1: [stdArchiveReader.cpp::StdArchiveReader::29] Error Code 1: Serialization (Serialization assertion magicTagRead == magicTag failed.Magic tag does not match)
ERROR: [TRT]: 4: [runtime.cpp::deserializeCudaEngine::76] Error Code 4: Internal Error (Engine deserialization failed.)
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:1528 Deserialize engine failed from file: /home/elio_admin/deep/unet/CA_CD_ICG_100_2switched.trt_ew
0:00:01.483888692 14958 0x564bfea49a60 WARN nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger:<nvinfer0> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1889> [UID = 1]: deserialize engine from file :/home/elio_admin/deep/unet/CA_CD_ICG_100_2switched.trt_ew failed
0:00:01.483940342 14958 0x564bfea49a60 WARN nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger:<nvinfer0> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1996> [UID = 1]: deserialize backend context from engine from file :/home/elio_admin/deep/unet/CA_CD_ICG_100_2switched.trt_ew failed, try rebuild
0:00:01.483953638 14958 0x564bfea49a60 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<nvinfer0> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1914> [UID = 1]: Trying to create engine from model files
WARNING: [TRT]: onnx2trt_utils.cpp:364: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
WARNING: [TRT]: TensorRT was linked against cuBLAS/cuBLAS LT 11.5.1 but loaded cuBLAS/cuBLAS LT 111.0.1
WARNING: [TRT]: Detected invalid timing cache, setup a local cache instead
WARNING: [TRT]: TensorRT was linked against cuBLAS/cuBLAS LT 11.5.1 but loaded cuBLAS/cuBLAS LT 111.0.1
0:00:16.907711285 14958 0x564bfea49a60 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<nvinfer0> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1947> [UID = 1]: serialize cuda engine to file: /home/elio_admin/deep/CA_CD.etlt_b1_gpu0_fp32.engine successfully
WARNING: [TRT]: TensorRT was linked against cuBLAS/cuBLAS LT 11.5.1 but loaded cuBLAS/cuBLAS LT 111.0.1
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 2
0 INPUT kFLOAT input_1 3x512x512
1 OUTPUT kFLOAT softmax_1 512x512x3
0:00:16.921437595 14958 0x564bfea49a60 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<nvinfer0> [UID 1]: Load new model:pgie_unet_tao_config.txt sucessfully
Pipeline is PREROLLING ...
Warning: Color primaries 5 not present and will be treated BT.601
Redistribute latency...
Pipeline is PREROLLED ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
Got EOS from element "pipeline0".
Execution ended after 0:00:20.576415680
Setting pipeline to PAUSED ...
Setting pipeline to READY ...
Setting pipeline to NULL ...
Freeing pipeline ...
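From what I have read, the "Magic tag does not match" error usually means the serialized .trt_ew engine was built with a different TensorRT version than the one DeepStream loads, which would explain why nvinfer falls back to rebuilding from the .etlt. Would regenerating the engine on this T4 with the tao-converter build that matches TensorRT 8.0 be the right fix? A sketch of what I have in mind (the key, paths, and file names below are placeholders, not my real values; the shapes are taken from the input_1 layer info in the log above):

```shell
# Hypothetical tao-converter invocation to regenerate the UNet engine on this
# machine, so the serialized engine matches the installed TensorRT 8.0.
TAO_KEY="nvidia_tlt"                                      # placeholder encode key
ETLT_MODEL=/home/elio_admin/deep/unet/model.etlt          # placeholder .etlt path
ENGINE_OUT=/home/elio_admin/deep/unet/model_b1_fp32.engine

# -k: TAO encode key, -t: precision, -p: input binding with min/opt/max shapes
# (matching the 3x512x512 input_1 reported in the log), -e: output engine path
cmd="tao-converter -k $TAO_KEY -t fp32 \
  -p input_1,1x3x512x512,1x3x512x512,1x3x512x512 \
  -e $ENGINE_OUT $ETLT_MODEL"
echo "$cmd"
```

If this is on the right track, I would then point model-engine-file in pgie_unet_tao_config.txt at the regenerated engine.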
Any help or suggestions would be appreciated.
• Hardware Platform (DGPU)
Tesla T4
• DeepStream Version
deepstream-app version 6.0.1
DeepStreamSDK 6.0.1
CUDA Driver Version: 11.4
CUDA Runtime Version: 11.4
TensorRT Version: 8.0
cuDNN Version: 8.4
libNVWarp360 Version: 2.0.1d3