I’m currently using detectnet-console and I’m wondering if there are any other pretrained models available. I’m specifically looking for a vehicle model; cars and trucks would be fine.
If I absolutely need to, I can look at training my own, but I don’t want to jump into that aspect just yet.
There is a relevant pre-trained model in the DeepStream package: a ResNet-18 network that detects three classes of objects: car, person, and two-wheeler.
Thank you so much for the help. I downloaded the SDK and I’m looking through the package. Are you referring to resnet18.caffemodel? Should I be able to just run detectnet-console with it like this:
./detectnet-console dog_1.jpg output_1.jpg resnet18.caffemodel
or are there further steps I need to take?
Hello, I’m having an issue with the recommended resnet18 model.
I moved the resnet18 folder into the networks folder at jetson-inference/data/networks, then went to the build folder and re-compiled with cmake … and make.
So what I ended up doing was deleting the build directory and starting fresh: I moved the resnet18 directory into data/networks, returned to a fresh build directory, and ran cmake … again. After cmake ran for several minutes, I ran make again.
After that I once again went to bin and tested it out, and I got the same results.
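For reference, the clean-rebuild sequence described above looks roughly like this (the paths and layout are assumptions based on the default jetson-inference tree, not verified against this exact setup):

```shell
# Sketch only: clean rebuild of jetson-inference after placing a model
# under data/networks. Paths are assumed from the thread.
cd ~/jetson-inference
rm -rf build && mkdir build && cd build
cmake ../        # configure; network data is expected under data/networks
make
cd aarch64/bin   # compiled samples such as detectnet-console land here
```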
The results that RaviKiranK posted are exactly what I got, with the exact same command. The link you provided looked like code rather than steps for me to follow; is that the case?
Hey, before I forget: thank you for your help.
RaviKiranK,
What you posted is exactly the results I’m seeing. Any insight?
We just checked detectnet-console with a custom model; it runs correctly in our environment.
Could you try setting NET to an absolute path and give it a try?
Here are our steps for your reference: 1. Put the custom model in HOME (/home/nvidia/DetectNet)
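As a sketch, an absolute-path invocation would look something like the following. The flags follow the usual jetson-inference conventions, and the file names under the model directory are assumptions; check them against your actual model.

```shell
# Sketch only: use absolute paths so detectnet-console finds the model
# regardless of the working directory. File names below are assumed.
NET=/home/nvidia/DetectNet
./detectnet-console dog_1.jpg output_1.jpg \
    --prototxt=$NET/deploy.prototxt \
    --model=$NET/resnet18.caffemodel \
    --input_blob=data \
    --output_cvg=coverage \
    --output_bbox=bboxes
```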
So I figured out what I was doing wrong. It’s now working properly, for the most part, but I’m still getting some errors and failed attempts.
[GIE] TensorRT version 2.1, build 2102
[GIE] attempting to open cache file /home/nvidia/jetson-inference/build/aarch64/bin/networks/resnet18/resnet18.caffemodel.2.tensorcache
[GIE] cache file not found, profiling network model
[GIE] platform has FP16 support
[GIE] loading /home/nvidia/jetson-inference/build/aarch64/bin/networks/resnet18/deploy.prototxt /home/nvidia/jetson-inference/build/aarch64/bin/networks/resnet18/resnet18.caffemodel
[GIE] failed to retrieve tensor for output 'coverage'
[GIE] failed to retrieve tensor for output 'bboxes'
Segmentation fault (core dumped)
When I downloaded the DeepStream SDK, the resnet18 folder only had four files in it.
NVidia@tegra-ubuntu:~/tensorrt/bin$ ./giexec --deploy=/home/NVidia/jetson-inference/build/aarch64/bin/networks/resnet18 --output=coverage --output=bboxes
deploy: /home/NVidia/jetson-inference/build/aarch64/bin/networks/resnet18
output: coverage
output: bboxes
could not find output blob coverage
Engine could not be created
Engine could not be created
Nvidia@tegra-ubuntu:~/tensorrt/bin$
Back to your problem: DeepStream and jetson-inference are two different frameworks for the object-detection use case.
The network, and the handling of the network outputs, may differ between them.
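One quick way to see such a mismatch is to list the layer names a deploy.prototxt actually defines and compare them against the output blobs that detectnet-console asks for. The tiny prototxt below is fabricated purely to illustrate the grep; with a real model, point grep at your actual deploy.prototxt.

```shell
# Fabricated stand-in prototxt, only to demonstrate the technique.
cat > /tmp/deploy.prototxt <<'EOF'
layer { name: "data" type: "Input" }
layer { name: "conv1" type: "Convolution" }
layer { name: "coverage" type: "Sigmoid" }
layer { name: "bboxes" type: "Convolution" }
EOF
# List every layer name defined in the prototxt.
# If "coverage" and "bboxes" are missing here, TensorRT cannot
# retrieve those output tensors, as in the log above.
grep -o 'name: "[^"]*"' /tmp/deploy.prototxt
```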
Thank you for the link. I’ve been watching a number of YouTube videos on it. The one question I still have about DIGITS is how to adjust the batch size, learning rate, etc. based on the number of GPUs I’m using.