Please provide the following information when requesting support:
• Hardware: Jetson Nano
• Network Type: YOLO_v3
@Morganh
The last time I built the YOLO engine with the tlt-converter command, it worked. Now the same command reports an error. Why?
./tlt-converter -k tlt_encode -d 3,544,960 -p Input,1x3x544x960,1x3x544x960,2x3x544x960 ./models/yolo3/yolov3_resnet18.etlt
[ERROR] Number of optimization profiles does not match model input node number.
Aborted
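For reference, each -p value follows the pattern <input_name>,<min_shape>,<opt_shape>,<max_shape>, and the converter expects one -p entry per model input node, which is what the "Number of optimization profiles does not match model input node number" message is complaining about. A minimal sketch of that format (the parse_profile helper is hypothetical, not part of tlt-converter):

```python
# Hypothetical helper (not tlt-converter code): parse one "-p" value of the
# form <input_name>,<min>,<opt>,<max> where each shape is NxCxHxW, and check
# the shapes are consistent (min <= opt <= max, elementwise).
def parse_profile(arg: str):
    name, min_s, opt_s, max_s = arg.split(",")
    shapes = [tuple(int(d) for d in s.split("x")) for s in (min_s, opt_s, max_s)]
    mn, op, mx = shapes
    assert len(mn) == len(op) == len(mx), "shape ranks must match"
    assert all(a <= b <= c for a, b, c in zip(mn, op, mx)), "need min <= opt <= max"
    return name, mn, op, mx

# The profile string from the failing command parses cleanly on its own:
print(parse_profile("Input,1x3x544x960,1x3x544x960,2x3x544x960"))
```

So the profile string itself is well-formed; the error suggests the parsed model exposes a different number of input nodes than the number of -p entries supplied.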
or
./tlt-converter -k nvidia_tlt -d 3,544,960 -p Input,1x3x544x960,1x3x544x960,2x3x544x960 ./models/yolo3/yolov3_resnet18.etlt
[ERROR] INVALID_ARGUMENT: getPluginCreator could not find plugin BatchedNMSDynamic_TRT version 1
ERROR: builtin_op_importers.cpp:3661 In function importFallbackPluginImporter:
[8] Assertion failed: creator && "Plugin not found, are the plugin name, version, and namespace correct?"
Assertion failed: creator && "Plugin not found, are the plugin name, version, and namespace correct?"
[ERROR] Failed to parse the model, please check the encoding key to make sure it's correct
[INFO] Detected input dimensions from the model: (-1, 3, 544, 960)
[INFO] Model has dynamic shape. Setting up optimization profiles.
[INFO] Using optimization profile min shape: (1, 3, 544, 960) for input: Input
[INFO] Using optimization profile opt shape: (1, 3, 544, 960) for input: Input
[INFO] Using optimization profile max shape: (2, 3, 544, 960) for input: Input
[ERROR] Network must have at least one output
[ERROR] Network validation failed.
[ERROR] Unable to create engine
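My understanding (an assumption, not taken from the converter's source) is that the ONNX parser resolves each custom layer by looking up a plugin creator by name and version in TensorRT's plugin registry, so a libnvinfer_plugin build that predates BatchedNMSDynamic_TRT simply has no matching entry and the parse fails as above. A toy illustration of that lookup, with made-up registry contents:

```python
# Toy model of TensorRT's plugin-creator lookup; registry contents below are
# invented for illustration and are not real registry dumps.
def find_creator(registry, name, version="1"):
    """Return True if a creator with this (name, version) is registered."""
    return any(n == name and v == version for n, v in registry)

stock_registry = [("BatchedNMS_TRT", "1"), ("NMS_TRT", "1")]        # older plugin lib
oss_registry = stock_registry + [("BatchedNMSDynamic_TRT", "1")]    # newer plugin build

print(find_creator(stock_registry, "BatchedNMSDynamic_TRT"))  # False -> parse fails
print(find_creator(oss_registry, "BatchedNMSDynamic_TRT"))    # True
```

If the lookup fails, the parser cannot materialize the NMS layer, which would also explain the follow-on "Network must have at least one output" error: the network was never fully built.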
or, even after I reinstall the plugin, it still reports an error, as shown below:
./tlt-converter -k nvidia_tlt -d 3,544,960 -p Input,1x3x544x960,1x3x544x960,2x3x544x960 -t fp16 models/yolo3/yolov3_resnet18.etlt
[ERROR] /home/jenkins/workspace/TensorRT/helpers/rel-7.1/L1_Nightly_Internal/build/source/rtSafe/resources.h (460) - Cuda Error in loadKernel: 702 (the launch timed out and was terminated)
[ERROR] …/rtSafe/safeRuntime.cpp (32) - Cuda Error in free: 702 (the launch timed out and was terminated)
terminate called after throwing an instance of 'nvinfer1::CudaError'
what(): std::exception
Aborted (core dumped)