How to customize YoloV3-tiny in DeepStream Python Apps (version 5.0) on Jetson TX2?

• Hardware Platform (Jetson / GPU) Jetson TX2
• DeepStream Version 5.0
• JetPack Version (valid for Jetson only) 4.4
• TensorRT Version unknown
• NVIDIA GPU Driver Version (valid for GPU only) unknown (newest?)

Hello Nvidia Developer,
I am trying to customize YoloV3-tiny in the DeepStream Python Apps on my Jetson TX2.
I downloaded the Python apps from the Jetson download center and successfully ran the USB demo with the Caffe model on my TX2.
But to stay compatible with previous projects, I have to use the YoloV3-tiny model for target detection.
So I want to know how to use a YOLO model in the DeepStream Python apps.
I tried to copy
1. deepstream/deepstream-5.0/sources/objectDetector_Yolo/config_infer_primary_yoloV3_tiny.txt
2. yolov3-tiny.cfg
3. yolov3-tiny.weights
to my working directory (the directory where the official demo runs).

Then, when I run deepstream_test_1_usb_test.py, the terminal tells me:
Using winsys: x11
ERROR: Could not open lib: /home/ms00a1/文档/deepstream/deepstream-5.0/sources/python/apps/deepstream-test1-usbcam/nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so, error string: /home/ms00a1/文档/deepstream/deepstream-5.0/sources/python/apps/deepstream-test1-usbcam/nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so: cannot open shared object file: No such file or directory
0:00:00.383292347 14738 0x1580fe90 ERROR nvinfer gstnvinfer.cpp:596:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1015> [UID = 1]: Could not open custom lib: (null)
0:00:00.383342683 14738 0x1580fe90 WARN nvinfer gstnvinfer.cpp:781:gst_nvinfer_start: error: Failed to create NvDsInferContext instance
0:00:00.383362939 14738 0x1580fe90 WARN nvinfer gstnvinfer.cpp:781:gst_nvinfer_start: error: Config file path: config_infer_primary_yoloV3_tiny.txt, NvDsInfer Error: NVDSINFER_CUSTOM_LIB_FAILED
Error: gst-resource-error-quark: Failed to create NvDsInferContext instance (1): /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(781): gst_nvinfer_start (): /GstPipeline:pipeline0/GstNvInfer:primary-inference:

Below is my “config_infer_primary_yoloV3_tiny.txt”:
[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
#0=RGB, 1=BGR
model-color-format=0
custom-network-config=yolov3-tiny.cfg
model-file=yolov3-tiny.weights
#model-engine-file=yolov3-tiny_b1_gpu0_fp32.engine
labelfile-path=labels.txt
#0=FP32, 1=INT8, 2=FP16 mode
network-mode=0
num-detected-classes=80
gie-unique-id=1
network-type=0
is-classifier=0
#0=Group Rectangles, 1=DBSCAN, 2=NMS, 3= DBSCAN+NMS Hybrid, 4 = None(No clustering)
cluster-mode=2
maintain-aspect-ratio=1
parse-bbox-func-name=NvDsInferParseCustomYoloV3Tiny
custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
engine-create-func-name=NvDsInferYoloCudaEngineGet
#scaling-filter=0
#scaling-compute-hw=0

[class-attrs-all]
nms-iou-threshold=0.3
threshold=0.7

How do I get “libnvdsinfer_custom_impl_Yolo.so”?
Do I compile it myself, or can I download it from Nvidia?
And how do I compile it?
Forgive my bad English.
I sincerely want your help!

Please read the README under objectDetector_Yolo/

--------------------------------------------------------------------------------
Compile the custom library:
  # Based on the API to use, 'NvDsInferCreateModelParser' or 'NvDsInferCudaEngineGet',
  # set the macro USE_CUDA_ENGINE_GET_API to 0 or 1 in
  # nvdsinfer_custom_impl_Yolo/nvdsinfer_yolo_engine.cpp

  # Export correct CUDA version (e.g. 10.2, 10.1)
  export CUDA_VER=10.2
  make -C nvdsinfer_custom_impl_Yolo
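Putting the README steps together, a sketch of the full build-and-copy workflow (the SDK path below assumes a default DeepStream 5.0 install under /opt/nvidia — adjust YOLO_DIR to wherever objectDetector_Yolo lives on your board; the final copy makes the relative custom-lib-path in your config resolve from the Python app's directory):

```shell
# Hypothetical location of the SDK sources; adjust to your install.
YOLO_DIR=/opt/nvidia/deepstream/deepstream-5.0/sources/objectDetector_Yolo

# JetPack 4.4 bundles CUDA 10.2, so export that version for the Makefile.
export CUDA_VER=10.2

if [ -d "$YOLO_DIR/nvdsinfer_custom_impl_Yolo" ]; then
    # Build libnvdsinfer_custom_impl_Yolo.so in place.
    make -C "$YOLO_DIR/nvdsinfer_custom_impl_Yolo"
    # Copy it next to the Python app so the relative custom-lib-path resolves.
    mkdir -p nvdsinfer_custom_impl_Yolo
    cp "$YOLO_DIR/nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so" \
       nvdsinfer_custom_impl_Yolo/
fi
```

Alternatively, instead of copying, you can set custom-lib-path in the config file to the absolute path of the built .so.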