Please provide the following information when requesting support.
• Hardware: Jetson TX1
• JetPack: 4.4
• TensorRT: 7.x
• Network Type: YOLOv4 (object detection)
• TLT Version: 3.21.08
• Training spec file (if you have one, please share here)
• I want to run inference with a TAO-generated model in Python, without using DeepStream. Is that possible?
It is possible. You can directly deploy the .etlt model with the Triton apps: GitHub - NVIDIA-AI-IOT/tao-toolkit-triton-apps: Sample app code for deploying TAO Toolkit trained models to Triton
You can also refer to the preprocessing and postprocessing in
https://github.com/NVIDIA-AI-IOT/tao-toolkit-triton-apps/blob/main/tao_triton/python/entrypoints/tao_client.py and
https://github.com/NVIDIA-AI-IOT/tao-toolkit-triton-apps/blob/main/tao_triton/python/postprocessing/yolov3_postprocessor.py
In addition, you can search the forum for related topics. For example:
@Morganh
Actually my model input size is 1472×960. So if I resize the image without changing the aspect ratio, the resized image size is 1472×828. Then how can I feed this image for inference?
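One common answer to that question is letterbox padding: resize the image to fit the network input while keeping the aspect ratio, then pad the remainder with a constant value. Below is a minimal sketch I am adding for illustration (it is not from that topic); the function name and pad value are arbitrary choices:

import numpy as np
from PIL import Image

def letterbox(image, target_w=1472, target_h=960, pad_value=128):
    # Resize to fit inside (target_w, target_h) without changing the aspect ratio.
    scale = min(target_w / image.width, target_h / image.height)
    new_w, new_h = int(image.width * scale), int(image.height * scale)
    resized = image.resize((new_w, new_h), Image.BILINEAR)
    # Pad the bottom/right with a constant so the output is exactly target_h x target_w.
    canvas = np.full((target_h, target_w, 3), pad_value, dtype=np.uint8)
    canvas[:new_h, :new_w, :] = np.asarray(resized)
    return canvas, scale  # keep scale to map detections back to the original image

img = Image.open("image.jpg").convert("RGB")
padded, scale = letterbox(img)  # e.g. 1472x828 content padded up to 1472x960

Detected boxes then only need to be divided by scale to land back on the original image; since the padding is on the bottom/right, the origin does not move.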
Here trtexec is run like this:
/usr/src/tensorrt/bin/trtexec --loadEngine=trt.engine --verbose
Results are:
&&&& RUNNING TensorRT.trtexec [TensorRT v8001] # /usr/src/tensorrt/bin/trtexec --loadEngine=trt.engine --verbose
[11/01/2021-09:15:32] [I] === Model Options ===
[11/01/2021-09:15:32] [I] Format: *
[11/01/2021-09:15:32] [I] Model:
[11/01/2021-09:15:32] [I] Output:
[11/01/2021-09:15:32] [I] === Build Options ===
[11/01/2021-09:15:32] [I] Max batch: 1
[11/01/2021-09:15:32] [I] Works…
Bounding box coordinates [[x1, y1, x2, y2], …] of the detections.
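For reference, a TAO YOLOv4 engine exported with the BatchedNMS plugin typically exposes four output bindings (num_detections, nmsed_boxes, nmsed_scores, nmsed_classes). The sketch below, modeled loosely on the yolov3_postprocessor.py linked above, turns them into that list of boxes; the assumption that boxes come back normalized to [0, 1] follows the TAO Triton sample and should be verified against your own engine:

import numpy as np

def parse_batched_nms(num_detections, nmsed_boxes, nmsed_scores, nmsed_classes,
                      img_w, img_h, conf_thresh=0.4):
    # Assumed shapes for batch size 1 after reshaping the flat host buffers:
    # num_detections: (1,), nmsed_boxes: (keep_top_k, 4),
    # nmsed_scores and nmsed_classes: (keep_top_k,).
    results = []
    for i in range(int(num_detections[0])):
        if nmsed_scores[i] < conf_thresh:
            continue
        x1, y1, x2, y2 = nmsed_boxes[i]
        # Boxes are assumed normalized to [0, 1]; scale to pixel coordinates.
        # nmsed_classes[i] holds the class index if you also need labels.
        results.append([x1 * img_w, y1 * img_h, x2 * img_w, y2 * img_h])
    return results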
Here is the full script (it’s quite basic):
import tensorrt as trt
import pycuda.driver as cuda
import pycuda.autoinit  # creates and manages a CUDA context
import numpy as np
from PIL import Image

TRT_LOGGER = trt.Logger(trt.Logger.VERBOSE)
trt.init_libnvinfer_plugins(TRT_LOGGER, '')  # required so the engine's BatchedNMS plugin can be found
DTYPE_TRT = trt.float32

path_img = "image.jpg"
offsets = (103.939, 116.779, 123.68)  # per-channel (BGR) means subtracted during preprocessing
yolo_reso = (3, 768, 1024)            # model input shape as (C, H, W)

# Simple helper data class that's a little nicer to use than a 2-tuple
# (from the TensorRT Python sample code)
class HostDeviceMem(object):
    def __init__(self, host_mem, device_mem):
        self.host = host_mem      # page-locked host buffer
        self.device = device_mem  # matching device allocation

    def __repr__(self):
        return "Host:\n" + str(self.host) + "\nDevice:\n" + str(self.device)
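The post is cut off here. For completeness, here is a sketch of how such a script usually continues, based on the standard TensorRT Python samples rather than on the original post, and assuming an explicit-batch engine built with TensorRT 7+ (for an implicit-batch engine, use context.execute_async with a batch size instead):

def allocate_buffers(engine):
    # One host/device buffer pair per binding, as in the TRT sample common.py.
    inputs, outputs, bindings = [], [], []
    stream = cuda.Stream()
    for binding in engine:  # iterate over binding names
        size = trt.volume(engine.get_binding_shape(binding)) * engine.max_batch_size
        dtype = trt.nptype(engine.get_binding_dtype(binding))
        host_mem = cuda.pagelocked_empty(size, dtype)
        device_mem = cuda.mem_alloc(host_mem.nbytes)
        bindings.append(int(device_mem))
        if engine.binding_is_input(binding):
            inputs.append(HostDeviceMem(host_mem, device_mem))
        else:
            outputs.append(HostDeviceMem(host_mem, device_mem))
    return inputs, outputs, bindings, stream

with open("trt.engine", "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())
context = engine.create_execution_context()
inputs, outputs, bindings, stream = allocate_buffers(engine)

# Preprocess to match yolo_reso and offsets above: resize, RGB -> BGR,
# subtract per-channel means, HWC -> CHW.
img = Image.open(path_img).resize((yolo_reso[2], yolo_reso[1]))
data = np.asarray(img, dtype=np.float32)[..., ::-1] - np.array(offsets, dtype=np.float32)
data = data.transpose(2, 0, 1).ravel()
np.copyto(inputs[0].host, data)

# Copy input to the device, run inference, copy outputs back.
cuda.memcpy_htod_async(inputs[0].device, inputs[0].host, stream)
context.execute_async_v2(bindings=bindings, stream_handle=stream.handle)
for out in outputs:
    cuda.memcpy_dtoh_async(out.host, out.device, stream)
stream.synchronize()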
OK, with your recommendations I found a working example of inference with YOLOv4, but I still have some issues:
The model I'm using is a custom YOLOv4 trained on our own dataset, following the TLT example (tlt_vc_samples_v1.1.0/yolo_v4/yolo_v4.ipynb). It is trained for Person, Car and Two_wheels.
Model: trt2-yolo.engine - Google Drive
To export the model we use:
!tlt yolo_v4 export -m $USER_EXPERIMENT_DIR/experiment_dir_unpruned/weights/yolov4_resnet18_epoch_080.tlt \
…
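As a side note: the export step produces an .etlt file, which still has to be converted into the trt.engine that trtexec and the script above load. A typical tao-converter invocation looks like the sketch below; the key, file names, the -d dims (the C,H,W of your model) and the BatchedNMS output name are assumptions you need to adapt:

tao-converter yolov4_resnet18_epoch_080.etlt \
    -k $KEY \
    -d 3,768,1024 \
    -o BatchedNMS \
    -e trt.engine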