model file 'networks/Deep-Homography-Webcam-320/deep_homography_webcam_320.onnx' was not found.

My question is:
When I execute ./homography-camera, the error says the Deep-Homography-Webcam-320 model can’t be found, but I also can’t find this model when I run ./download-models.sh.

linux@linux-desktop:~/jetson-inference/build/aarch64/bin$ ./homography-camera 
[gstreamer] initialized gstreamer, version 1.14.5.0
[gstreamer] gstCamera attempting to initialize with GST_SOURCE_NVARGUS, camera 0
[gstreamer] gstCamera pipeline string:
nvarguscamerasrc sensor-id=0 ! video/x-raw(memory:NVMM), width=(int)1280, height=(int)720, framerate=30/1, format=(string)NV12 ! nvvidconv flip-method=2 ! video/x-raw ! appsink name=mysink
[gstreamer] gstCamera successfully initialized with GST_SOURCE_NVARGUS, camera 0

homography-camera:  successfully initialized camera device
    width:  1280
   height:  720
    depth:  12 (bpp)

homographyNet -- loading homography network model from:
         -- model        networks/Deep-Homography-Webcam-320/deep_homography_webcam_320.onnx
         -- input_blob   'input_0'
         -- output_blob  'output_0'
         -- batch_size   1

[TRT]   TensorRT version 5.1.6
[TRT]   loading NVIDIA plugins...
[TRT]   Plugin Creator registration succeeded - GridAnchor_TRT
[TRT]   Plugin Creator registration succeeded - NMS_TRT
[TRT]   Plugin Creator registration succeeded - Reorg_TRT
[TRT]   Plugin Creator registration succeeded - Region_TRT
[TRT]   Plugin Creator registration succeeded - Clip_TRT
[TRT]   Plugin Creator registration succeeded - LReLU_TRT
[TRT]   Plugin Creator registration succeeded - PriorBox_TRT
[TRT]   Plugin Creator registration succeeded - Normalize_TRT
[TRT]   Plugin Creator registration succeeded - RPROI_TRT
[TRT]   Plugin Creator registration succeeded - BatchedNMS_TRT
[TRT]   completed loading NVIDIA plugins.
[TRT]   detected model format - ONNX  (extension '.onnx')
[TRT]   desired precision specified for GPU: FASTEST
[TRT]   requested fasted precision for device GPU without providing valid calibrator, disabling INT8
[TRT]   native precisions detected for GPU:  FP32, FP16
[TRT]   selecting fastest native precision for GPU:  FP16
[TRT]   attempting to open engine cache file .1.1.GPU.FP16.engine
[TRT]   cache file not found, profiling network model on device GPU

error:  model file 'networks/Deep-Homography-Webcam-320/deep_homography_webcam_320.onnx' was not found.
        if loading a built-in model, maybe it wasn't downloaded before.

        Run the Model Downloader tool again and select it for download:

           $ cd <jetson-inference>/tools
           $ ./download-models.sh

[TRT]   failed to load homographyNet
homography-camera:   failed to initialize homographyNet

Hi,

You will need to install dialog first.

Please try the following commands to set it up:

$ sudo apt-get install dialog
$ cd {jetson-inference}/tools
$ ./download-models.sh

Then run homography-camera again.
Please let us know the result.

Thanks.

Hi AastaLLL

It doesn’t work.
dialog was already installed.

In download-models.sh I can find Deep-Homography-COCO, but not Deep-Homography-Webcam-320.

Where can I download it?

From the looks of things, I don’t believe I have that model online - I recall it was trained only on my workshop room, so it would probably need to be re-trained on your environment. I am travelling at the moment but will look for it when I return.

Sorry for the delay - I found this model, but haven’t tested it again with the latest JetPack: https://nvidia.box.com/s/3mjb0pkt5di4g114ydm40hj8jrscmlft
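
In case it helps while the downloader script doesn’t list it, here is a minimal Python sketch of placing the downloaded file where the error above expects it. The destination directory is an assumption derived from the relative path 'networks/Deep-Homography-Webcam-320/...' printed by homography-camera when run from build/aarch64/bin; adjust it if your install resolves model paths elsewhere.

import os
import shutil

# assumed destination, based on the relative path in the error message above;
# this presumes homography-camera is run from ~/jetson-inference/build/aarch64/bin
dst_dir = os.path.expanduser(
    "~/jetson-inference/build/aarch64/bin/networks/Deep-Homography-Webcam-320")
os.makedirs(dst_dir, exist_ok=True)

# move the ONNX file downloaded from the Box link into that directory
shutil.move("deep_homography_webcam_320.onnx",
            os.path.join(dst_dir, "deep_homography_webcam_320.onnx"))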

I plan on updating these with odometry estimation models based on a ResNet backbone, as I recall these previous VGG homography models didn’t produce great results under camera egomotion (which is why I hadn’t uploaded them). These previous VGG homography models estimate the frame displacement (see the Deep Homography paper and PyTorch training code), from which the homography matrix then needs to be generated using traditional techniques like RANSAC or the direct linear transform (DLT). It would seem this homography estimation step introduced some error and jitter in the results.
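
For reference, a minimal Python sketch of that last step - turning a predicted 4-point corner displacement into a 3x3 homography matrix with OpenCV. The function name, patch size, and corner ordering are just assumptions for illustration, not the actual jetson-inference code.

import numpy as np
import cv2

def displacement_to_homography(delta, patch_size=128):
    # delta: (4, 2) array with the predicted (dx, dy) offset of each patch corner,
    # ordered top-left, top-right, bottom-right, bottom-left (assumed ordering)
    src = np.float32([[0, 0],
                      [patch_size, 0],
                      [patch_size, patch_size],
                      [0, patch_size]])
    dst = src + np.float32(delta)
    # direct linear transform on the 4 corner correspondences
    return cv2.getPerspectiveTransform(src, dst)

# with more than 4 correspondences (e.g. from feature matching), RANSAC can be
# used instead to reject outliers:
#   H, mask = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC, 3.0)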

In the next experiment, I hope to try estimating the motion in world or screen coordinates directly to see if that performs better.