YOLOv7 engine file detection not working

I have trained a YOLOv7-tiny model. I have tested it, and it runs fine and detects all the objects. I have now converted the best.pt file into best.onnx. I am running the ONNX file using the code below; it generates the .engine file and loads the video fine.

import jetson.inference
import jetson.utils
from jetson_utils import cudaAllocMapped, cudaResize

net = jetson.inference.detectNet(model="/home/andrew/Documents/Test/best.onnx",
                                 labels="/home/andrew/Documents/Test/labels.txt",
                                 input_blob="images", output_cvg="num_dets",
                                 output_bbox="det_boxes", threshold=0.2)

camera = jetson.utils.videoSource("/home/andrew/Documents/video/D2.mp4")
display = jetson.utils.videoOutput("display://0")

while display.IsStreaming():
    img = camera.Capture()
    print("Original size {} {}".format(img.width, img.height))
    # allocate a smaller CUDA buffer and downscale the captured frame into it
    frame = cudaAllocMapped(width=int(img.width * 0.333),
                            height=int(img.height * 0.444),
                            format=img.format)
    cudaResize(img, frame)
    print(frame.width, frame.height)
    detections = net.Detect(frame)
    display.Render(frame)  # render the frame (with any detection overlay) to the display
    display.SetStatus("Object Detection | Network {:.0f} FPS".format(net.GetNetworkFPS()))

Everything appears to work: no error is shown while building the engine file, and the video file loads at the end. But no detection happens; it doesn't draw any bounding box or show any object name. The above code is adapted from jetson-inference/detectnet-example-2.md at master · dusty-nv/jetson-inference · GitHub

We have only updated output_cvg. Is there anything I am missing? Please help. Thanks
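For context, a YOLOv7 ONNX exported with the end2end NMS head typically emits four tensors (num_dets, det_boxes, det_scores, det_classes) rather than the coverage/bbox pair the detectNet example expects. Below is a minimal NumPy sketch of how those tensors would be decoded; the shapes and the synthetic values are assumptions for illustration, not output from my actual model:

```python
import numpy as np

# Assumed end2end-NMS export layout (synthetic stand-in data)
batch, max_det = 1, 100
num_dets = np.array([[2]], dtype=np.int32)                # valid detections per image
det_boxes = np.zeros((batch, max_det, 4), dtype=np.float32)
det_boxes[0, 0] = [10, 20, 110, 220]                      # x1, y1, x2, y2 in pixels
det_boxes[0, 1] = [50, 60, 150, 260]
det_scores = np.zeros((batch, max_det), dtype=np.float32)
det_scores[0, :2] = [0.91, 0.45]
det_classes = np.zeros((batch, max_det), dtype=np.int32)
det_classes[0, :2] = [0, 2]

def decode(num_dets, det_boxes, det_scores, det_classes, conf_thres=0.2):
    """Keep only the first num_dets entries per image, filtered by confidence."""
    results = []
    for b in range(num_dets.shape[0]):
        for i in range(int(num_dets[b, 0])):
            if det_scores[b, i] >= conf_thres:
                results.append((int(det_classes[b, i]),
                                float(det_scores[b, i]),
                                det_boxes[b, i].tolist()))
    return results

dets = decode(num_dets, det_boxes, det_scores, det_classes)
print(dets)
```

If the exported graph really has this layout, detectNet's built-in ONNX parsing may not line up with it, which could explain empty detections even though the engine builds cleanly.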

Sorry for the late response. Have you managed to get the issue resolved, or do you still need support? Thanks


Is there any difference in the image preprocessing?
For example, is there any normalization or mean subtraction applied on the PyTorch side?
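To illustrate the kind of preprocessing mismatch meant here, below is a minimal NumPy sketch of YOLOv7-style input preparation (0-1 normalization, HWC-to-NCHW layout). The 640x640 size and the normalization are assumptions about this particular export; resizing/letterboxing is omitted for brevity:

```python
import numpy as np

def preprocess(img_hwc_uint8):
    """Convert a uint8 HWC image to the normalized NCHW float tensor
    YOLOv7 is typically trained on (resize/letterbox assumed done already)."""
    x = img_hwc_uint8.astype(np.float32) / 255.0   # scale pixels to 0-1
    x = np.transpose(x, (2, 0, 1))                 # HWC -> CHW
    return np.expand_dims(x, 0)                    # add batch dim -> NCHW

# demo on a synthetic frame
img = np.random.randint(0, 256, (640, 640, 3), dtype=np.uint8)
x = preprocess(img)
print(x.shape, x.dtype, x.min(), x.max())
```

If TensorRT is instead fed raw 0-255 pixels (or a differently normalized tensor), the network can produce scores below the detection threshold even though the PyTorch model works.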


Hi @kayccc

No I still need support.


Hi @AastaLLL

I am not sure what you mean; I have not added any custom preprocessing. To keep things simple, I converted yolov7-tiny.pt to .onnx and am using it directly. Unfortunately, there is still no inference or detection.



Would you mind sharing a script that can output PyTorch and TensorRT inference results?

A common cause is that an input image read with torchvision may have some predefined preprocessing applied.
If the model expects a normalized input, please apply the same normalization before feeding the image to TensorRT.
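Once both raw output tensors are dumped (e.g. as NumPy arrays), the comparison itself can be a simple element-wise diff. A minimal sketch, with synthetic arrays standing in for the real PyTorch and TensorRT outputs:

```python
import numpy as np

def compare_outputs(ref, other, atol=1e-2):
    """Report max/mean absolute difference between two raw output tensors
    and return whether they agree within the given tolerance."""
    ref = np.asarray(ref, dtype=np.float32)
    other = np.asarray(other, dtype=np.float32)
    diff = np.abs(ref - other)
    print("max abs diff: {:.6f}  mean abs diff: {:.6f}".format(diff.max(), diff.mean()))
    return bool(np.all(diff <= atol))

# synthetic stand-ins for dumped outputs
torch_out = np.linspace(0, 1, 8, dtype=np.float32)
trt_out_ok = torch_out + 1e-3      # small numeric drift (e.g. FP16) -> still matches
trt_out_bad = torch_out / 255.0    # e.g. one side skipped the /255 normalization

ok = compare_outputs(torch_out, trt_out_ok)
bad = compare_outputs(torch_out, trt_out_bad)
```

A large, systematic difference (like the second case) usually points at a preprocessing mismatch rather than an engine-build problem.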


This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.