TLT 2 image: nvcr.io/nvidia/tlt-streamanalytics:v2.0_dp_py2
Triton image: nvcr.io/nvidia/tritonserver:20.03-py3

I trained a mobilenet_v2_ssd model, pruned it, retrained it, and exported it with FP16. tlt-infer works inside the TLT 2 container, but when I place the exported engine in the Triton model_repository as a tensorrt_plan, the server fails to load it with this error:
I0510 18:20:12.415923 1 server_status.cc:55] New status tracking for model 'mobilenet_v2_ssd'
I0510 18:20:12.415982 1 model_repository_manager.cc:680] loading: mobilenet_v2_ssd:1
E0510 18:20:12.436904 1 logging.cc:43] INVALID_ARGUMENT: getPluginCreator could not find plugin BatchTilePlugin_TRT version 1
E0510 18:20:12.436943 1 logging.cc:43] safeDeserializationUtils.cpp (293) - Serialization Error in load: 0 (Cannot deserialize plugin since corresponding IPluginCreator not found in Plugin Registry)
E0510 18:20:12.441026 1 logging.cc:43] INVALID_STATE: std::exception
E0510 18:20:12.441126 1 logging.cc:43] INVALID_CONFIG: Deserialize the cuda engine failed.
Could this be a mismatch between the TensorRT versions in the two containers?
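To check that hypothesis, one quick diagnostic is to compare the nvinfer package versions reported inside each container (a sketch; the exact dpkg package names may vary between these images):

```shell
# TensorRT version inside the TLT 2 container
docker run --rm nvcr.io/nvidia/tlt-streamanalytics:v2.0_dp_py2 \
    sh -c "dpkg -l | grep nvinfer"

# TensorRT version inside the Triton container
docker run --rm nvcr.io/nvidia/tritonserver:20.03-py3 \
    sh -c "dpkg -l | grep nvinfer"
```

A serialized TensorRT engine is only guaranteed to deserialize against the same TensorRT version it was built with, so differing versions here would explain the deserialization failure. Note also that the missing BatchTilePlugin_TRT must be registered in the plugin registry of the process that loads the engine, so even with matching versions the Triton process needs a libnvinfer_plugin build that provides that plugin.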