I’ve recently re-installed NVCaffe, DIGITS, and jetson-inference, and the environment seems to have changed slightly, such that I can no longer run the following without errors:
$ cd jetson-inference/build/aarch64/bin
$ NET=DetectNet-COCO-Dog
$ ./detectnet-camera \
--prototxt=$NET/deploy.prototxt \
--model=$NET/snapshot_iter_38600.caffemodel \
It seems that, although I specify NET=DetectNet-COCO-Dog, the program never looks for an engine cache file at that location.
detectnet-camera works fine with the following, which should have produced an engine cache somewhere (see the check after the command):
$ cd jetson-inference/build/aarch64/bin
$ ./detectnet-camera coco-dog
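If that run produced a cache, it should be sitting next to the model file. Judging from the engine filename pattern in the logs below (model path plus .2.1.GPU.FP16.engine), something like this should find it; this is a guess from the log output, not documented behavior:
$ ls networks/DetectNet-COCO-Dog/*.engine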
Here’s the full error … Please advise!
nvidia@tegra-ubuntu:~$ cd jetson-inference/build/aarch64/bin
nvidia@tegra-ubuntu:~/jetson-inference/build/aarch64/bin$ NET=DetectNet-COCO-Dog
nvidia@tegra-ubuntu:~/jetson-inference/build/aarch64/bin$ ./detectnet-camera \
> --prototxt=$NET/deploy.prototxt \
> --model=$NET/snapshot_iter_38600.caffemodel \
>
detectnet-camera
args (3): 0 [./detectnet-camera] 1 [--prototxt=DetectNet-COCO-Dog/deploy.prototxt] 2 [--model=DetectNet-COCO-Dog/snapshot_iter_38600.caffemodel]
[gstreamer] initialized gstreamer, version 1.8.3.0
[gstreamer] gstCamera attempting to initialize with GST_SOURCE_NVCAMERA
[gstreamer] gstCamera pipeline string:
nvcamerasrc fpsRange="30.0 30.0" ! video/x-raw(memory:NVMM), width=(int)1280, height=(int)720, format=(string)NV12 ! nvvidconv flip-method=0 ! video/x-raw ! appsink name=mysink
[gstreamer] gstCamera successfully initialized with GST_SOURCE_NVCAMERA
detectnet-camera: successfully initialized video device
width: 1280
height: 720
depth: 12 (bpp)
detectNet -- loading detection network model from:
-- prototxt DetectNet-COCO-Dog/deploy.prototxt
-- model DetectNet-COCO-Dog/snapshot_iter_38600.caffemodel
-- input_blob 'data'
-- output_cvg 'coverage'
-- output_bbox 'bboxes'
-- mean_pixel 0.000000
-- class_labels NULL
-- threshold 0.500000
-- batch_size 2
[TRT] TensorRT version 4.0.2
[TRT] desired precision specified for GPU: FASTEST
[TRT] requested fasted precision for device GPU without providing valid calibrator, disabling INT8
[TRT] native precisions detected for GPU: FP32, FP16
[TRT] selecting fastest native precision for GPU: FP16
[TRT] attempting to open engine cache file .2.1.GPU.FP16.engine
[TRT] cache file not found, profiling network model on device GPU
[TRT] device GPU, loading
[TRT] CaffeParser: Could not open file
[TRT] CaffeParser: Could not parse model file
[TRT] device GPU, failed to parse caffe network
device GPU, failed to load
detectNet -- failed to initialize.
detectnet-camera: failed to initialize imageNet
nvidia@tegra-ubuntu:~/jetson-inference/build/aarch64/bin$
Update: the fix was to prefix the model directory with networks/:
$ cd jetson-inference/build/aarch64/bin
$ NET=networks/DetectNet-COCO-Dog
$ ./detectnet-camera \
--prototxt=$NET/deploy.prototxt \
--model=$NET/snapshot_iter_38600.caffemodel
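A quick sanity check that the paths actually resolve before launching (the snapshot filename is from my training run, so adjust as needed):
$ cd jetson-inference/build/aarch64/bin
$ NET=networks/DetectNet-COCO-Dog
$ ls -l $NET/deploy.prototxt $NET/snapshot_iter_38600.caffemodel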
Hi,
The log indicates that the application cannot find the model path correctly.
As you already found, you can solve this issue by setting the $NET parameter:
https://github.com/dusty-nv/jetson-inference#loading-custom-models-on-jetson
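In short, the paths are resolved relative to the directory you launch from, so for the pretrained models $NET needs the networks/ prefix, e.g.:
$ cd jetson-inference/build/aarch64/bin
$ NET=networks/DetectNet-COCO-Dog    # not NET=DetectNet-COCO-Dog
$ ls $NET/deploy.prototxt            # should exist before launching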
Thanks.
Hello, I am trying to follow the jetson-inference tutorial, but after I got to "Building the Repo from Source" I got this error:
[TRT] TensorRT version 3.0
[TRT] detected model format - caffe (extension '.caffemodel')
[TRT] desired precision specified for GPU: FASTEST
[TRT] requested fasted precision for device GPU without providing valid calibrator, disabling INT8
[TRT] native precisions detected for GPU: FP32, FP16
[TRT] selecting fastest native precision for GPU: FP16
[TRT] attempting to open engine cache file .2.1.GPU.FP16.engine
[TRT] cache file not found, profiling network model on device GPU
[TRT] device GPU, loading
could not open file
Can you help me?
Hi,
Please follow this page to build jetson_inference from source:
https://github.com/dusty-nv/jetson-inference/blob/master/docs/building-repo.md#building-the-repo-from-source
Based on your log, the model was not downloaded correctly in your environment.
Have you executed the cmake command successfully?
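For reference, the documented build steps look roughly like this (note that the cmake step runs a pre-build script that downloads the pretrained models, so a failure there would explain the missing files):
$ git clone https://github.com/dusty-nv/jetson-inference
$ cd jetson-inference
$ mkdir build
$ cd build
$ cmake ../     # downloads the model snapshots as part of pre-build
$ make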
Thanks.
Hi AastaLLL,
Thanks for replying.
Yes, I re-downloaded and re-installed the environment, but it didn’t work, and cmake didn’t report any errors.
When I tried with my own datasets, both the image classification and the object detection models worked; it only fails with the models used in the tutorial.
thanks.
Hi aosanshugaa, what command are you running to launch the program from the terminal?
Hi dusty_nv,
When I try the examples from the tutorial they don’t work:
./imagenet-camera googlenet
or
./detectnet-camera \
--prototxt=$NET/deploy.prototxt \
--model=$NET/snapshot_iter_13110.caffemodel \
--labels=$NET/labels.txt \
--input_blob=data \
--output_blob=softmax
It shows this error:
[TRT] TensorRT version 3.0
[TRT] detected model format - caffe (extension '.caffemodel')
[TRT] desired precision specified for GPU: FASTEST
[TRT] requested fasted precision for device GPU without providing valid calibrator, disabling INT8
[TRT] native precisions detected for GPU: FP32, FP16
[TRT] selecting fastest native precision for GPU: FP16
[TRT] attempting to open engine cache file networks/bvlc_googlenet.caffemodel.2.1.GPU.FP16.engine
[TRT] cache file not found, profiling network model on device GPU
[TRT] device GPU, loading networks/bvlc_googlenet.caffemodel
could not open file Could not parse deploy file
[TRT] device GPU, failed to parse caffe network
device GPU, failed to load networks/bvlc_googlenet.caffemodel
failed to load networks/bvlc_googlenet.caffemodel
imageNet -- failed to initialize.
imagenet-console: failed to initialize imageNet
But when I try with my own datasets, it works.
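To rule out a missing download, the file named in the log can be checked directly from the bin directory (a quick sanity check, not a fix):
$ cd jetson-inference/build/aarch64/bin
$ ls -l networks/bvlc_googlenet.caffemodel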
Hi,
I have exactly the same problem with segnet-console using Pascal-VOC dataset.
I run:
./segnet-console test.jpg output_0428.png \
--prototxt=$NET/deploy.prototxt \
--model=$NET/snapshot_iter_8790.caffemodel \
--labels=$NET/pascal-voc-classes.txt \
--input_blob=data \
--output_blob=score_fr
And I get:
[TRT] TensorRT version 5.0.6
[TRT] detected model format - caffe (extension '.caffemodel')
[TRT] desired precision specified for GPU: FASTEST
[TRT] requested fasted precision for device GPU without providing valid calibrator, disabling INT8
[TRT] native precisions detected for GPU: FP32, FP16
[TRT] selecting fastest native precision for GPU: FP16
[TRT] attempting to open engine cache file .2.1.GPU.FP16.engine
[TRT] cache file not found, profiling network model on device GPU
[TRT] device GPU, loading
[TRT] CaffeParser: Could not open file
[TRT] CaffeParser: Could not parse model file
[TRT] device GPU, failed to parse caffe network
device GPU, failed to load
segNet -- failed to initialize.
segnet-console: failed to initialize segnet
I’m also fairly sure that I built jetson-inference properly. One thing I notice is that the engine cache filename in my log is empty (.2.1.GPU.FP16.engine), as if $NET expanded to nothing (a quick check is sketched below). Do you have any idea how to solve this?
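For reference, this is the check (paths are from my setup; the engine filename pattern is taken from the googlenet log earlier in the thread, e.g. networks/bvlc_googlenet.caffemodel.2.1.GPU.FP16.engine):
$ echo $NET                  # empty output would explain the blank path in the log
$ ls -l $NET/deploy.prototxt $NET/snapshot_iter_8790.caffemodel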