How to integrate csi-camera with yolov5 model to run on Jetson Nano

I have been trying to use a CSI camera to detect custom objects in real time using YOLOv5. I have read in a lot of previous related issues that we can use the code from this repo

But I am unable to figure out how to integrate this code with the yolov5 file.

Can someone please help me understand how to do this?

Here are my files: (14.0 KB) (54.4 KB)


You can use jetson-inference or DeepStream to run TensorRT with a CSI camera.

Jetson inference: GitHub - dusty-nv/jetson-inference: Hello AI World guide to deploying deep-learning inference networks and deep vision primitives with TensorRT and NVIDIA Jetson.


Hello, thank you for helping.
I went through the jetson-inference repo, but I can’t seem to find an option for the YOLOv5 model.
I am also encountering some errors while trying to accomplish my objective using deepstream.

Can you please confirm whether the first method I mentioned in my issue, using the CSI-camera code with the YOLO model, is possible? If it is, can you please help me understand how to do it?


Have you wrapped the CSI frame into a cv::Mat (in Python, an OpenCV NumPy array)?
If yes, you can follow the sample below to copy the buffer to the GPU and then infer it with TensorRT.
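For reference, the CSI-Camera approach typically opens the sensor through a GStreamer pipeline with OpenCV, so each captured frame is already an OpenCV matrix (a NumPy array in Python). A minimal sketch, assuming OpenCV was built with GStreamer support; the pipeline parameters are illustrative defaults:

```python
# Sketch: open a Jetson CSI camera via GStreamer so cap.read() returns
# OpenCV frames (NumPy arrays). Assumes OpenCV was built with GStreamer.

def gstreamer_pipeline(capture_width=1280, capture_height=720,
                       display_width=640, display_height=480,
                       framerate=30, flip_method=0):
    """Build an nvarguscamerasrc pipeline string for a CSI sensor."""
    return (
        "nvarguscamerasrc ! "
        f"video/x-raw(memory:NVMM), width={capture_width}, "
        f"height={capture_height}, framerate={framerate}/1 ! "
        f"nvvidconv flip-method={flip_method} ! "
        f"video/x-raw, width={display_width}, height={display_height}, "
        "format=BGRx ! videoconvert ! video/x-raw, format=BGR ! appsink"
    )

# Usage (on the Jetson):
# import cv2
# cap = cv2.VideoCapture(gstreamer_pipeline(), cv2.CAP_GSTREAMER)
# ok, frame = cap.read()   # frame is the cv::Mat referred to above
```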


No, I have not wrapped the CSI frame into a cv::Mat. I don’t understand what that means, or why and how I should do it. Can you kindly help me understand better?
Thank you


The frame variable here is already a cv::Mat.
You can pass it to TensorRT like:

np.copyto(host_inputs[0], frame.ravel())
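Before that copy, the frame usually has to be resized to the network’s input size and converted to the layout the engine expects. A hedged sketch, assuming a stock YOLOv5 export (RGB, NCHW, float32 in [0, 1]); host_inputs[0] is the page-locked buffer from the sample above:

```python
import numpy as np

def to_network_input(frame_rgb):
    """HWC uint8 frame (already resized to the engine's input size) ->
    flat float32 CHW tensor in [0, 1], ready for
    np.copyto(host_inputs[0], ...). Assumes a stock YOLOv5 export:
    RGB input, NCHW layout."""
    x = frame_rgb.astype(np.float32) / 255.0   # normalize to [0, 1]
    x = x.transpose(2, 0, 1)                   # HWC -> CHW
    return np.ascontiguousarray(x).ravel()

# np.copyto(host_inputs[0], to_network_input(resized_frame))
```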


Hello @AastaLLL ,
Okay, I understood from your suggested link where I can pass this frame. But in the link, the instruction /usr/src/tensorrt/bin/trtexec --onnx=/usr/src/tensorrt/data/resnet50/ResNet50.onnx --saveEngine=sample.engine is for ResNet-50. Can you please tell me how to modify it for YOLOv5 weights?


TensorRT needs the ONNX format, so you will need to convert your model to an ONNX file first.

Is your model defined with .cfg and .weights files, i.e. the Darknet format?
If yes, there are some samples online to convert the model format, for example:
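In this thread, though, the model is YOLOv5, whose weights are PyTorch (.pt) rather than Darknet; the YOLOv5 repository ships its own ONNX exporter. A hedged sketch of the two steps (export, then engine build); all paths and filenames are examples:

```shell
# Hypothetical paths: run from a clone of the ultralytics/yolov5 repo,
# with your trained weights saved as best.pt.
python export.py --weights best.pt --include onnx

# Build a TensorRT engine from the export; --onnx may point at any path,
# it does not have to live under /usr/src/tensorrt/data.
/usr/src/tensorrt/bin/trtexec --onnx=best.onnx --saveEngine=yolov5.engine --fp16
```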


Hello @AastaLLL ,
I have converted my weights to ONNX format, but I am unable to figure out how to run the YOLOv5 model instead of ResNet-50. As mentioned in your suggested link, I tried to navigate to the /usr/src/tensorrt/data directory to change ResNet50 to YOLOv5, but I couldn’t find YOLOv5 among the options. Please tell me how to run YOLOv5 like ResNet-50 using this command: /usr/src/tensorrt/bin/trtexec --onnx=/usr/src/tensorrt/data/resnet50/ResNet50.onnx --saveEngine=sample.engine.

UPDATE: I ran the command to build the engine using my YOLOv5 ONNX weights and built a sample engine. Now, when I try to run the code mentioned in your link in a Python terminal, I get this error for TensorRT, although I have duly followed all the steps and installed TensorRT on my Jetson Nano.

I can’t seem to understand why TensorRT is not being recognised in the Python environment.

Python - 3.9

Please help me out!

@AastaLLL Can you please look into the issue and help me out? It’s really urgent for me to solve this!


We only support the default Python version. For JetPack 4, it’s Python 3.6.
For other python versions, you can build it from the source:
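One quick way to see which interpreter actually has the bindings (a diagnostic sketch, assuming the JetPack default is Python 3.6):

```shell
# The JetPack-provided TensorRT bindings are installed for the system
# Python only, so importing from another interpreter will fail.
python3.6 -c "import tensorrt; print(tensorrt.__version__)"
python3.9 -c "import tensorrt"   # expected to fail unless built from source
```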

You can infer the model with the sample shared above.
For YOLOv5, you will need some post-processing to convert the output tensor into bounding boxes.
You can check the author’s code or a community implementation for more info.
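To make that concrete, here is a hedged sketch of such post-processing in NumPy, assuming the engine outputs the standard YOLOv5 export layout: a (num_boxes, 5 + num_classes) array of [cx, cy, w, h, objectness, class scores...] already decoded to input-image pixels. The thresholds match the YOLOv5 defaults:

```python
import numpy as np

def postprocess(pred, conf_thres=0.25, iou_thres=0.45):
    """Turn raw YOLOv5 output (num_boxes, 5 + num_classes) into
    (boxes_xyxy, scores, class_ids) after thresholding and per-class NMS."""
    obj = pred[:, 4]
    cls_scores = pred[:, 5:] * obj[:, None]   # class score * objectness
    cls_id = cls_scores.argmax(1)
    score = cls_scores.max(1)
    keep = score > conf_thres
    pred, score, cls_id = pred[keep], score[keep], cls_id[keep]

    # xywh (center) -> xyxy (corners)
    boxes = np.empty((len(pred), 4), np.float32)
    boxes[:, 0] = pred[:, 0] - pred[:, 2] / 2
    boxes[:, 1] = pred[:, 1] - pred[:, 3] / 2
    boxes[:, 2] = pred[:, 0] + pred[:, 2] / 2
    boxes[:, 3] = pred[:, 1] + pred[:, 3] / 2

    # Greedy per-class NMS (simple O(n^2) version)
    order = score.argsort()[::-1]
    kept = []
    while order.size:
        i = order[0]
        kept.append(i)
        rest = order[1:]
        same = cls_id[rest] == cls_id[i]
        xx1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        yy1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        xx2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        yy2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter + 1e-9)
        order = rest[~(same & (iou > iou_thres))]
    return boxes[kept], score[kept], cls_id[kept]
```

The output boxes are in input-image pixel coordinates, so you still need to scale them back to the original frame size before drawing.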


This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.