How to use an inference engine model (made by TLT) on Xavier

Hi NVIDIA,
We are using a Jetson Xavier flashed with JetPack 4.2.1 and DeepStream 4.0,
and the Transfer Learning Toolkit installed on a desktop (Titan XP).

1. We want to retrain the yolov2_tiny inference engine from the DeepStream 4.0 samples.
Is it correct that this YOLO model engine can be retrained through the Transfer Learning Toolkit?
path: deepstream4.0/sources/objectDetector_Yolo/deepstream_app_config_yoloV2_tiny.txt
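
For reference, the engine we mean is the one referenced in that config (the excerpt below is abbreviated from the DeepStream 4.0 sample; exact lines may differ per setup):

[primary-gie]
enable=1
# TensorRT engine that the sample builds on first run (batch 1, FP32)
model-engine-file=model_b1_fp32.engine
config-file=config_infer_primary_yoloV2_tiny.txt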

2. If so, should we download a model from NGC, and which object detection backbone network should we use?

3. We tried ‘detection.ipynb’ in the TLT examples and got resnet18_detector.etlt (or .uff).
Is this a model engine we can run on the Jetson Xavier? If not, what file format can run on Xavier?
The resnet18_detector.etlt (297.8 kB) is much smaller than model_b1_fp32.engine (126.4 MB) on the Xavier,
so maybe it is not an engine? Confusing :(
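
Our understanding from the TLT docs (please correct us if wrong) is that the .etlt is an encrypted export, not a TensorRT engine, and has to be converted on the target device with tlt-converter, roughly like this (the key is a placeholder, and the output node names and input dims are copied from the TLT detection example, so they may need adjusting):

# run on the Xavier, not on the training desktop
tlt-converter -k <ngc_key_used_for_export> \
              -o output_cov/Sigmoid,output_bbox/BiasAdd \
              -d 3,384,1248 \
              -e resnet18_detector.engine \
              resnet18_detector.etlt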

Hi,

Sorry, TLT does not support YOLO, only the following models:

https://developer.nvidia.com/transfer-learning-toolkit

Image Classification    Object Detection
ResNet18                ResNet50
ResNet50                VGG16
VGG16                   GoogLeNet
VGG19
AlexNet
GoogLeNet
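
For example, the detection backbone is selected in the TLT training spec file; a rough sketch (field names follow the detectnet_v2 example spec shipped with TLT and may vary by version):

# model_config section of a detectnet_v2 training spec (sketch only)
model_config {
  arch: "resnet"        # backbone family from the table above
  num_layers: 50        # e.g. ResNet50
  pretrained_model_file: "/workspace/pretrained/resnet50.hdf5"  # weights from NGC
}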

Since our YOLO sample uses the original author’s model without modification, you can use the darknet framework to retrain the model directly:
https://pjreddie.com/darknet/yolo/
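
For example, the training command from the darknet documentation looks like this (for yolov2-tiny you would swap in the corresponding tiny cfg and its matching pre-trained convolutional weights; the file names below are from the darknet VOC example):

# train YOLOv2 on Pascal VOC starting from pre-trained convolutional weights
./darknet detector train cfg/voc.data cfg/yolov2-voc.cfg darknet19_448.conv.23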

Thanks.