Can I use a YOLOv3 model trained with PyTorch 1.6 in DeepStream 5.1?

I have a trained YOLOv3 model that I want to deploy with DeepStream 5.1 on a Jetson NX, and my PyTorch version is 1.6. Is that OK? I urgently need help!
Or please point me to a possible workaround. Many thanks!


It should work.

We have a YOLOv3 sample that works with the Darknet format directly in the DeepStream samples:


However, since you have a PyTorch-based model, please refer to the following repository to convert it: .pth → .onnx → .plan.


Many thanks for your instructions; I will try it right away.
Our own YOLOv3 model, which we trained on the server, is ready and works well standalone on the NX, but when we convert it to ONNX for deployment, it shows this:

What should I do?



Could you call model.eval() to set evaluation mode before exporting and see if that works?
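The reason `eval()` matters: layers such as Dropout and BatchNorm behave differently in training mode, so exporting a model that is still in train mode can bake the wrong behavior into the ONNX graph. A quick illustration with a toy network:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
net = nn.Sequential(nn.Linear(8, 8), nn.Dropout(p=0.5))
x = torch.ones(1, 8)

net.train()               # default mode after construction
a, b = net(x), net(x)     # dropout is active: outputs vary run to run

net.eval()                # inference mode: dropout is disabled
c, d = net(x), net(x)
print(torch.equal(c, d))  # True: forward passes are deterministic in eval mode
```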

I'm still at the stage of setting up the environment. It seems to be a problem with the OpenCV path; I use OpenCV 4.1.1 as shipped in JetPack and changed nothing, but when I try to build darknet, make fails on the Makefile.


I guess this should be a very simple setup or config question, but I just took over the NX boards…


Software crossing different platforms can sometimes hit compatibility issues.

If you can convert the model into ONNX on your server, you can simply do it there.
Then copy the file to the Jetson and use it with DeepStream directly.
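On the DeepStream side, the nvinfer config can point at the copied ONNX file, and DeepStream will build the TensorRT engine on first run. A sketch of the relevant `[property]` section, with placeholder file names you would replace with your own:

```ini
[property]
gpu-id=0
# Placeholder paths: point these at your exported model.
onnx-file=yolov3.onnx
model-engine-file=yolov3.onnx_b1_gpu0_fp16.engine
# 0=FP32, 1=INT8, 2=FP16
network-mode=2
num-detected-classes=80
batch-size=1
# YOLOv3 outputs need a custom bounding-box parser; the
# objectDetector_Yolo sample shipped with DeepStream provides one:
parse-bbox-func-name=NvDsInferParseCustomYoloV3
custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
```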


Yes, you are right. OpenCV is not urgent for my project, so I am setting that aside and moving on to the next step, because we get the RTSP stream in H.265 directly from the camera!
This is my first time touching a Jetson, and time is very tight for a very big project, not just study or play…
I re-flashed my Jetson image, and after 7 days and nights everything is going smoothly so far.
Luckily we trained the YOLO model on Ubuntu with the GPU version, and it runs very well on the Jetson NX at 1080p, but only standalone, not at 4K, and not in DeepStream yet.
I have finished setting up the environment and will start putting it into DeepStream tonight. I'd like to ask: if my model already runs well standalone on the Jetson NX, must I convert the .pt to ONNX, or can I use a custom model with just my weights?
I'd like the simplest possible path, so I can show my project to my customers quickly.

Thanks a lot!


Do you run the YOLO model in PyTorch GPU mode?
When you deploy a model with DeepStream, it runs inference with TensorRT.

Although both TensorRT and PyTorch mainly use the cuDNN backend, TensorRT is optimized for the Jetson GPU and can give you better performance.

You can check the inference time in our benchmark table to see the expected performance:


Thank you. I have been trying jetson-inference since yesterday.
It seems to perform better than our own.
I will start with it first, then move on to the rest.

thank you

Good to know this.