"Serialization (Serialization assertion magicTagRead == magicTag failed.Magic tag does not match" Error While running converted TRT model

Hello.

I am trying to use our TAO-converted TRT UNet model in a DeepStream pipeline:

gst-launch-1.0 filesrc location=02****.mp4 ! qtdemux ! h264parse ! nvv4l2decoder ! m.sink_0 nvstreammux name=m batch-size=1 width=1920 height=1080 ! nvvideoconvert ! nvdspreprocess config-file= /opt/nvidia/deepstream/deepstream/sources/gst-plugins/gst-nvdspreprocess/config_preprocess.txt  ! nvinfer config-file-path=pgie_unet_tao_config.txt input-tensor-meta=1 batch-size=7  ! nvmultistreamtiler width=1920 height=1080 ! nvvideoconvert ! nvv4l2h264enc ! h264parse ! 'video/x-h264,stream-format=avc' ! matroskamux ! filesink location=gst_out.mkv

But while executing it within the GStreamer pipeline I am getting the error below:

Setting pipeline to PAUSED ...
ERROR: [TRT]: 1: [stdArchiveReader.cpp::StdArchiveReader::29] Error Code 1: Serialization (Serialization assertion magicTagRead == magicTag failed.Magic tag does not match)
ERROR: [TRT]: 4: [runtime.cpp::deserializeCudaEngine::76] Error Code 4: Internal Error (Engine deserialization failed.)
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:1528 Deserialize engine failed from file: /home/elio_admin/deep/unet/CA_CD_ICG_100_2switched.trt_ew
0:00:01.483888692 14958 0x564bfea49a60 WARN                 nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger:<nvinfer0> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1889> [UID = 1]: deserialize engine from file :/home/elio_admin/deep/unet/CA_CD_ICG_100_2switched.trt_ew failed
0:00:01.483940342 14958 0x564bfea49a60 WARN                 nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger:<nvinfer0> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1996> [UID = 1]: deserialize backend context from engine from file :/home/elio_admin/deep/unet/CA_CD_ICG_100_2switched.trt_ew failed, try rebuild
0:00:01.483953638 14958 0x564bfea49a60 INFO                 nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<nvinfer0> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1914> [UID = 1]: Trying to create engine from model files
WARNING: [TRT]: onnx2trt_utils.cpp:364: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
WARNING: [TRT]: TensorRT was linked against cuBLAS/cuBLAS LT 11.5.1 but loaded cuBLAS/cuBLAS LT 111.0.1
WARNING: [TRT]: Detected invalid timing cache, setup a local cache instead
WARNING: [TRT]: TensorRT was linked against cuBLAS/cuBLAS LT 11.5.1 but loaded cuBLAS/cuBLAS LT 111.0.1
0:00:16.907711285 14958 0x564bfea49a60 INFO                 nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<nvinfer0> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1947> [UID = 1]: serialize cuda engine to file: /home/elio_admin/deep/CA_CD.etlt_b1_gpu0_fp32.engine successfully
WARNING: [TRT]: TensorRT was linked against cuBLAS/cuBLAS LT 11.5.1 but loaded cuBLAS/cuBLAS LT 111.0.1
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 2
0   INPUT  kFLOAT input_1         3x512x512       
1   OUTPUT kFLOAT softmax_1       512x512x3       

0:00:16.921437595 14958 0x564bfea49a60 INFO                 nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<nvinfer0> [UID 1]: Load new model:pgie_unet_tao_config.txt sucessfully
Pipeline is PREROLLING ...
Warning: Color primaries 5 not present and will be treated BT.601
Redistribute latency...
Pipeline is PREROLLED ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
Got EOS from element "pipeline0".
Execution ended after 0:00:20.576415680
Setting pipeline to PAUSED ...
Setting pipeline to READY ...
Setting pipeline to NULL ...
Freeing pipeline ...

Any help and suggestions would be appreciated.

• Hardware Platform (DGPU)

Tesla T4

• DeepStream Version

deepstream-app version 6.0.1
DeepStreamSDK 6.0.1
CUDA Driver Version: 11.4
CUDA Runtime Version: 11.4
TensorRT Version: 8.0
cuDNN Version: 8.4
libNVWarp360 Version: 2.0.1d3

Could you attach your models and config files?

[property]

labelfile-path=unet_labels.txt
#model-engine-file=Model.trt
engine-create-func-name=NvDsInferYoloCudaEngineGet


#tlt-encoded-model=Model.etlt

net-scale-factor=0.00784313725490196
offsets=127.5;127.5;127.5
infer-dims=3;512;512
#tlt-model-key=Model_Key
network-type=1
num-detected-classes=3
model-color-format=1
segmentation-threshold=0.0
output-blob-names=softmax_1
segmentation-output-order=1
gie-unique-id=1

pgie_unet_tao_config.txt (1.7 KB)
classification_0.trt (88.4 MB)

You can refer to the topic below:
https://forums.developer.nvidia.com/t/error-trt-stdarchivereader-serialization-assertion/226692?u=morganh
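The gist of that topic: a serialized TensorRT engine is only valid for the exact TensorRT version (and GPU) it was built with, so an engine generated on a different machine or TensorRT release has to be rebuilt on the deployment machine. As a quick sanity check (a sketch only, assuming the default trtexec location and your engine path), trying to load the engine outside DeepStream should reproduce the same magic-tag error if the versions do not match:

# should fail with the same "magicTagRead == magicTag" assertion if this engine
# was serialized with a different TensorRT version than the one installed locally
/usr/src/tensorrt/bin/trtexec --loadEngine=/home/elio_admin/deep/unet/CA_CD_ICG_100_2switched.trt_ew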

The referred source does not seem to help much; could you please look into the issue again?

We are trying to run the application directly on the host machine (Ubuntu 18.04, dGPU A6000 and T4) instead of inside Docker. We have also matched the TensorRT versions on both machines: the machine where tlt-export was used to convert the model to .etlt and build the TensorRT engine has the same TensorRT version as our execution server. We are trying to use a TAO UNet model custom-trained on our own data for 3-class detection, but we are still facing the issues mentioned above. If there is any reference link to a similar kind of problem, or anything else around UNet, we can have a look at that as well.
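For completeness, a quick way to confirm both machines actually load the same TensorRT build (a minimal sketch; the exact package names can differ between installs):

# list the installed TensorRT / nvinfer packages
dpkg -l | grep -i -E "tensorrt|nvinfer"
# version reported by the Python bindings, if installed
python3 -c "import tensorrt; print(tensorrt.__version__)"

Running these on the conversion machine and on the execution server should give identical versions; otherwise the engine has to be rebuilt on the server.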

There has been no update from you for a while, so we assume this is no longer an issue.
Hence we are closing this topic. If you need further support, please open a new one.
Thanks

Hi @nitinp14920914
Please try to let DeepStream generate the TensorRT engine directly based on the .etlt model.

Please follow

https://github.com/NVIDIA-AI-IOT/deepstream_tao_apps/blob/master/configs/unet_tao/pgie_unet_tao_config.txt

And please ignore lines 31 and 32; DeepStream can actually parse the .etlt model.
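For illustration, the relevant lines in such a config would look roughly like the sketch below; the .etlt path, key, and engine file name are placeholders and must be taken from your own TAO export:

# point nvinfer at the encrypted TAO model instead of a pre-built .trt engine
tlt-encoded-model=CA_CD.etlt
tlt-model-key=<your TAO export key>
# nvinfer serializes the engine it builds to this path and reuses it on later runs
model-engine-file=CA_CD.etlt_b1_gpu0_fp32.engine
infer-dims=3;512;512
output-blob-names=softmax_1
# engine-create-func-name=NvDsInferYoloCudaEngineGet is the YOLO custom engine
# builder from the objectDetector_Yolo sample and is likely not needed for UNet

With lines like these, nvinfer parses the .etlt with the locally installed TensorRT on first start (as already happens in the log above) and reuses the serialized engine afterwards, so no separately converted .trt file is required.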

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.