I am trying to use custom YoloV3 weights as the model for DeepStream with the Python bindings. I am working with the “deepstream-test1-usbcam” sample, which works fine using all the preset settings (I believe the stock model detects Person, Bicycle, RoadSign, and one other class I forget). So within the pgie config file, I changed the “model-file=…” line to point to my custom .weights file and changed the number of detected classes to 1 (the custom model only has one class). When trying to run this, I get some “CaffeParser” errors which I cannot find much information on. I read that if I comment out the “model-engine-file=…” and “proto-file=…” lines, a new engine should be generated, but this didn’t seem to affect the error output at all.
Am I missing something? Should I attempt to use some of the yoloV3 configuration files?
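For reference, this is roughly the shape of pgie config I think the Yolo route needs, going from the config_infer_primary_yoloV3.txt that ships with the objectDetector_Yolo sample (the paths, class count, and function names below are my best guess from memory, so please treat it as a sketch rather than something I have working):

[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
# darknet .cfg and .weights instead of the Caffe prototxt/caffemodel pair
custom-network-config=yolov3-custom.cfg
model-file=yolov3-custom.weights
labelfile-path=labels.txt
batch-size=1
# 0=FP32, 1=INT8, 2=FP16
network-mode=2
num-detected-classes=1
gie-unique-id=1
network-type=0
cluster-mode=2
maintain-aspect-ratio=1
# custom output parser built from nvdsinfer_custom_impl_Yolo
parse-bbox-func-name=NvDsInferParseCustomYoloV3
custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
engine-create-func-name=NvDsInferYoloCudaEngineGet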
No, unfortunately it is just a .weights file. Does this mean I would continue using the preset caffemodel but point it to my own .weights? Or how would I go about this? Thank you.
Another thing: it seems there is a lot less information to go on for the Python bindings (which I need), so I decided to test out the regular “objectDetector_Yolo” sample to get a feel for it. I copied the yolov3.cfg and yolov3.weights files into that directory and ran:
“deepstream-app -c deepstream_app_config_yoloV3.txt”. It seems to start fine, but when it begins building the model through TensorRT it prints the layers “conv_1, leaky_1, …” up to “conv_107, yolo_107”, then freezes, and after a few minutes reports that I do not have enough resources to run it.
What might I be doing wrong here? Thank you again.
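In case it helps to see my exact steps, what I ran is roughly the sequence from the objectDetector_Yolo README (the CUDA_VER value is just what I believe JetPack 4.5 ships with; adjust to whatever is actually installed):

cd /opt/nvidia/deepstream/deepstream-5.1/sources/objectDetector_Yolo
# build the custom YOLO parser/engine-builder library first
export CUDA_VER=10.2
make -C nvdsinfer_custom_impl_Yolo
# then run the sample app with the YoloV3 config
deepstream-app -c deepstream_app_config_yoloV3.txt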
Sure. So basically I’m just trying to test out “deepstream_app_config_yoloV3.txt” with the standard settings within deepstream-5.1/sources/objectDetector_Yolo. When I first installed and built DeepStream, running “deepstream-app -c deepstream_app_config_yoloV3.txt” gave errors saying it couldn’t find yolov3.cfg. So I copied the yolov3.cfg and yolov3.weights files from the YoloV3 folder I had (from AlexeyAB’s darknet) into the objectDetector_Yolo folder.
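One thing I should probably double-check on my end: if I remember right, the sample README points at the original darknet release for these two files, so in case the cfg from AlexeyAB’s fork differs I could fetch the reference ones instead (URLs from the darknet project):

wget https://raw.githubusercontent.com/pjreddie/darknet/master/cfg/yolov3.cfg
wget https://pjreddie.com/media/files/yolov3.weights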
When I try to run the deepstream-app, everything seems to load fine at the start, but once it gets to the TensorRT part it reaches a certain point, then freezes without printing any error.
What I have in mind is that people are saying that in their cases they used DeepStream 5.0 with JetPack 4.4, which I can try on another SD card, but I’m confused as to why this freezes without outputting anything or opening any camera/display window.
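On the camera point: if I understand the sample correctly, the stock deepstream_app_config_yoloV3.txt reads from a sample video file rather than a camera, so maybe no camera window is expected. To use my USB cam I assume the [source0] group would need to be switched to a V4L2 source, roughly like this (resolution and device node are guesses for my camera):

[source0]
enable=1
# type 1 = CameraV4L2
type=1
camera-width=640
camera-height=480
camera-fps-n=30
camera-fps-d=1
camera-v4l2-dev-node=0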
How long did you wait?
From the log, it was building the TensorRT engine; this step needs several minutes (with no log output printed to the terminal during that time).
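Once the engine has been serialized, you can also set model-engine-file in the nvinfer config to point at the generated file so later runs deserialize it instead of rebuilding, for example (the exact filename depends on your batch size and precision mode, so use the name of the file actually produced on your device):

# example only; match the engine file generated in your objectDetector_Yolo folder
model-engine-file=model_b1_gpu0_fp32.engine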