Error converting model to TensorRT

Hello everyone,
I am facing memory errors while converting my model to TensorRT. I have TensorFlow 2.4 installed on my Jetson Nano with CUDA 10, 4 GB of RAM, and 8 GB of swap available.
I am trying to convert a custom single-class YOLOv3 object detection model to TensorRT. The Jetson Nano runs out of memory again and again, while the same script running on my PC with a GPU and on Google Colab was able to convert YOLOv3 to TensorRT successfully. However, as the NVIDIA documentation says, these engines are hardware-specific, so I cannot deploy the models built there on the Jetson Nano.
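For reference, the conversion step in my script follows the standard TF-TRT flow for TensorFlow 2.x, roughly like this (a minimal sketch; the SavedModel paths are placeholders for my actual model):

from tensorflow.python.compiler.tensorrt import trt_convert as trt

# Placeholder path to the exported YOLOv3 SavedModel
SAVED_MODEL_DIR = "yolov3_saved_model"

params = trt.DEFAULT_TRT_CONVERSION_PARAMS._replace(
    precision_mode=trt.TrtPrecisionMode.FP16,
    max_workspace_size_bytes=1 << 28,  # keep the workspace small on the Nano
)
converter = trt.TrtGraphConverterV2(
    input_saved_model_dir=SAVED_MODEL_DIR,
    conversion_params=params,
)
converter.convert()
converter.save("yolov3_trt_saved_model")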
Here is the error:

tensorflow.python.framework.errors_impl.InternalError: Failed copying input tensor from /job:localhost/replica:0/task:0/device:CPU:0 to /job:localhost/replica:0/task:0/device:GPU:0 in order to run Identity: Dst tensor is not initialized. [Op:Identity]

2021-03-15 15:02:30.619804: I tensorflow/core/common_runtime/bfc_allocator.cc:1051] Stats:
Limit: 198533120
InUse: 198533120
MaxInUse: 198533120
NumAllocs: 314
MaxAllocSize: 33554432
Reserved: 0
PeakReserved: 0
LargestFreeBlock: 0

Hi,

The log indicates that you are using TF-TRT rather than pure TensorRT.
Would you mind giving the TensorRT API a try?

You can find examples for YOLOv3 in the folders below:

/opt/nvidia/deepstream/deepstream-5.0/sources/objectDetector_Yolo/
/usr/src/tensorrt/samples/python/yolov3_onnx/
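
For reference, the core of the yolov3_onnx sample is building an engine from an ONNX file with the TensorRT Python API. Below is a minimal sketch for the TensorRT 7.x API shipped with JetPack (yolov3.onnx is a placeholder for your converted model):

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
EXPLICIT_BATCH = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)

def build_engine(onnx_path):
    builder = trt.Builder(TRT_LOGGER)
    network = builder.create_network(EXPLICIT_BATCH)
    parser = trt.OnnxParser(network, TRT_LOGGER)
    # Parse the ONNX model and report any parser errors
    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            for i in range(parser.num_errors):
                print(parser.get_error(i))
            return None
    config = builder.create_builder_config()
    config.max_workspace_size = 1 << 28  # 256 MB; keep it small on the Nano
    return builder.build_engine(network, config)

engine = build_engine("yolov3.onnx")
with open("yolov3.trt", "wb") as f:
    f.write(engine.serialize())

Since the engine is built directly on the device without keeping TensorFlow in memory, this path is usually much lighter on the Nano's 4 GB of RAM.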

Thanks.

Hi @AastaLLL,
Thanks for the reply. I was successfully able to convert my Darknet YOLO to TensorRT and run a prediction once, but on running inference again I am facing a new error. Please have a look.
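
For context, my inference side deserializes the saved engine once and then reuses the execution context, roughly like this (a minimal sketch using pycuda; the engine path and tensor shapes are placeholders, and my real model has three YOLO output heads rather than one):

import numpy as np
import pycuda.autoinit  # creates the CUDA context
import pycuda.driver as cuda
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

# Deserialize the serialized engine once and reuse it for every run
with open("yolov3.trt", "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())
context = engine.create_execution_context()

# Allocate host and device buffers once, outside the loop
h_input = np.zeros((1, 3, 416, 416), dtype=np.float32)
h_output = np.zeros((1, 255, 13, 13), dtype=np.float32)  # placeholder shape
d_input = cuda.mem_alloc(h_input.nbytes)
d_output = cuda.mem_alloc(h_output.nbytes)
stream = cuda.Stream()

for _ in range(2):  # the second pass is where the new error shows up
    h_input[...] = np.random.rand(*h_input.shape)
    cuda.memcpy_htod_async(d_input, h_input, stream)
    context.execute_async_v2([int(d_input), int(d_output)], stream.handle)
    cuda.memcpy_dtoh_async(h_output, d_output, stream)
    stream.synchronize()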