Exception: jetson.inference -- detectNet failed to load network

Description

I think TensorRT was updated, and now the built-in network fails to load, even though I changed nothing in either my code or the repos.

I do not remember the exact previous version, but the current one is 7.1.0.
Please let me know if I can roll back the version.
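For reference, one way to confirm which TensorRT version is currently installed on a Jetson (a sketch; exact package names can vary between JetPack releases):

```shell
# List TensorRT-related Debian packages and their versions
dpkg -l | grep -i tensorrt

# Or, if the Python bindings are installed, ask them directly
python3 -c "import tensorrt; print(tensorrt.__version__)"
```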

Here is the full log:
hamza@hamza-desktop:~/Desktop/animal detection$ python3 animal.py
jetson.inference.__init__.py
jetson.inference -- initializing Python 3.6 bindings...
jetson.inference -- registering module types...
jetson.inference -- done registering module types
jetson.inference -- done Python 3.6 binding initialization
jetson.utils.__init__.py
jetson.utils -- initializing Python 3.6 bindings...
jetson.utils -- registering module functions...
jetson.utils -- done registering module functions
jetson.utils -- registering module types...
jetson.utils -- done registering module types
jetson.utils -- done Python 3.6 binding initialization
jetson.inference – PyTensorNet_New()
jetson.inference – PyDetectNet_Init()
jetson.inference -- detectNet loading build-in network 'ssd-mobilenet-v2'

detectNet -- loading detection network model from:
          -- model        networks/SSD-Mobilenet-v2/ssd_mobilenet_v2_coco.uff
          -- input_blob   'Input'
          -- output_blob  'NMS'
          -- output_count 'NMS_1'
          -- class_labels networks/SSD-Mobilenet-v2/ssd_coco_labels.txt
          -- threshold    0.500000
          -- batch_size   1

[TRT]   TensorRT version 7.1.0
[TRT]   loading NVIDIA plugins...
[TRT]   Registered plugin creator - ::GridAnchor_TRT version 1
[TRT]   Registered plugin creator - ::NMS_TRT version 1
[TRT]   Registered plugin creator - ::Reorg_TRT version 1
[TRT]   Registered plugin creator - ::Region_TRT version 1
[TRT]   Registered plugin creator - ::Clip_TRT version 1
[TRT]   Registered plugin creator - ::LReLU_TRT version 1
[TRT]   Registered plugin creator - ::PriorBox_TRT version 1
[TRT]   Registered plugin creator - ::Normalize_TRT version 1
[TRT]   Registered plugin creator - ::RPROI_TRT version 1
[TRT]   Registered plugin creator - ::BatchedNMS_TRT version 1
[TRT]   Could not register plugin creator -  ::FlattenConcat_TRT version 1
[TRT]   Registered plugin creator - ::CropAndResize version 1
[TRT]   Registered plugin creator - ::DetectionLayer_TRT version 1
[TRT]   Registered plugin creator - ::Proposal version 1
[TRT]   Registered plugin creator - ::ProposalLayer_TRT version 1
[TRT]   Registered plugin creator - ::PyramidROIAlign_TRT version 1
[TRT]   Registered plugin creator - ::ResizeNearest_TRT version 1
[TRT]   Registered plugin creator - ::Split version 1
[TRT]   Registered plugin creator - ::SpecialSlice_TRT version 1
[TRT]   Registered plugin creator - ::InstanceNormalization_TRT version 1
[TRT]   completed loading NVIDIA plugins.
[TRT]   detected model format - UFF  (extension '.uff')
[TRT]   desired precision specified for GPU: FASTEST
[TRT]   requested fasted precision for device GPU without providing valid calibrator, disabling INT8
[TRT]   native precisions detected for GPU:  FP32, FP16
[TRT]   selecting fastest native precision for GPU:  FP16
[TRT]   attempting to open engine cache file /usr/local/bin/networks/SSD-Mobilenet-v2/ssd_mobilenet_v2_coco.uff.1.1.7100.GPU.FP16.engine
[TRT]   loading network profile from engine cache... /usr/local/bin/networks/SSD-Mobilenet-v2/ssd_mobilenet_v2_coco.uff.1.1.7100.GPU.FP16.engine
[TRT]   device GPU, /usr/local/bin/networks/SSD-Mobilenet-v2/ssd_mobilenet_v2_coco.uff loaded
[TRT]   coreReadArchive.cpp (38) - Serialization Error in verifyHeader: 0 (Version tag does not match)
[TRT]   INVALID_STATE: std::exception
[TRT]   INVALID_CONFIG: Deserialize the cuda engine failed.
[TRT]   device GPU, failed to create CUDA engine
detectNet -- failed to initialize.
jetson.inference -- detectNet failed to load built-in network 'ssd-mobilenet-v2'
PyTensorNet_Dealloc()
Traceback (most recent call last):
  File "animal.py", line 4, in <module>
    net = jetson.inference.detectNet("ssd-mobilenet-v2", threshold=0.5)
Exception: jetson.inference -- detectNet failed to load network

Hi,

coreReadArchive.cpp (38) - Serialization Error in verifyHeader: 0 (Version tag does not match)

This error occurs because the serialized TensorRT engine was generated by a different TensorRT version.

If you have the ssd_mobilenet_v2_coco.uff model, please delete the ssd_mobilenet_v2_coco.uff.1.1.7100.GPU.FP16.engine file first, and then re-generate a new engine from the .uff with the TensorRT 7.1.0 API.
Engine generation is triggered automatically if no .engine file exists.
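Assuming the model lives at the path shown in the log above (/usr/local/bin/networks/SSD-Mobilenet-v2), the steps above amount to something like:

```shell
# Path taken from the log above; adjust if your jetson-inference
# install keeps its models elsewhere (e.g. jetson-inference/data/networks)
cd /usr/local/bin/networks/SSD-Mobilenet-v2

# Delete the stale engine that was serialized by the old TensorRT version
sudo rm ssd_mobilenet_v2_coco.uff.1.1.7100.GPU.FP16.engine

# Re-run the program; with no .engine file present, jetson-inference
# rebuilds the engine from the .uff using the installed TensorRT 7.1.0
python3 animal.py
```

The first run after deletion will be slow, since TensorRT has to re-optimize the network for the device before caching the new engine.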

Thanks.

Hi, I'm having the same issue. Could you show me how to regenerate a new engine?

Hi @cespedesk, if you delete the *.engine file from the model's folder (these are typically stored under jetson-inference/data/networks) and then re-run the program, it will re-generate the TensorRT engine for you.