How to load models from a custom directory?

Description

Hi All,
I am trying to use the built-in SSD-Mobilenet-v1 model in a container, but I would like to keep the model's .uff and labels files in a custom directory.
However, I get errors whenever I change the directory.

How can I tell jetson.inference.detectNet(…) that my files are somewhere else?

Files:
my-detection.py
networks/SSD-Mobilenet-v1/ssd_mobilenet_v1_coco.uff
networks/SSD-Mobilenet-v1/ssd_coco_labels.txt

This works:

net = jetson.inference.detectNet(argv=['--network=ssd-mobilenet-v1', '--model=networks/SSD-Mobilenet-v1/ssd_mobilenet_v1_coco.uff', '--labels=networks/SSD-Mobilenet-v1/ssd_coco_labels.txt'])

For example, if I rename SSD-Mobilenet-v1 to SSD-something-else and update the detectNet call accordingly, like this:

net = jetson.inference.detectNet(argv=['--network=ssd-mobilenet-v1', '--model=networks/SSD-something-else/ssd_mobilenet_v1_coco.uff', '--labels=networks/SSD-something-else/ssd_coco_labels.txt'])

Then, I receive this error:

detectNet -- loading detection network model from:
          -- model        networks/SSD-Mobilenet-v1/ssd_mobilenet_v1_coco.uff
          -- input_blob   'Input'
          -- output_blob  'Postprocessor'
          -- output_count 'Postprocessor_1'
          -- class_labels networks/SSD-Mobilenet-v1/ssd_coco_labels.txt
          -- threshold    0.500000
          -- batch_size   1

[TRT]    TensorRT version 8.2.1
[TRT]    loading NVIDIA plugins...
[TRT]    Registered plugin creator - ::GridAnchor_TRT version 1
[TRT]    Registered plugin creator - ::GridAnchorRect_TRT version 1
[TRT]    Registered plugin creator - ::NMS_TRT version 1
[TRT]    Registered plugin creator - ::Reorg_TRT version 1
[TRT]    Registered plugin creator - ::Region_TRT version 1
[TRT]    Registered plugin creator - ::Clip_TRT version 1
[TRT]    Registered plugin creator - ::LReLU_TRT version 1
[TRT]    Registered plugin creator - ::PriorBox_TRT version 1
[TRT]    Registered plugin creator - ::Normalize_TRT version 1
[TRT]    Registered plugin creator - ::ScatterND version 1
[TRT]    Registered plugin creator - ::RPROI_TRT version 1
[TRT]    Registered plugin creator - ::BatchedNMS_TRT version 1
[TRT]    Registered plugin creator - ::BatchedNMSDynamic_TRT version 1
[TRT]    Could not register plugin creator -  ::FlattenConcat_TRT version 1
[TRT]    Registered plugin creator - ::CropAndResize version 1
[TRT]    Registered plugin creator - ::DetectionLayer_TRT version 1
[TRT]    Registered plugin creator - ::EfficientNMS_TRT version 1
[TRT]    Registered plugin creator - ::EfficientNMS_ONNX_TRT version 1
[TRT]    Registered plugin creator - ::EfficientNMS_TFTRT_TRT version 1
[TRT]    Registered plugin creator - ::Proposal version 1
[TRT]    Registered plugin creator - ::ProposalLayer_TRT version 1
[TRT]    Registered plugin creator - ::PyramidROIAlign_TRT version 1
[TRT]    Registered plugin creator - ::ResizeNearest_TRT version 1
[TRT]    Registered plugin creator - ::Split version 1
[TRT]    Registered plugin creator - ::SpecialSlice_TRT version 1
[TRT]    Registered plugin creator - ::InstanceNormalization_TRT version 1
[TRT]    detected model format - UFF  (extension '.uff')
[TRT]    desired precision specified for GPU: FASTEST
[TRT]    requested fasted precision for device GPU without providing valid calibrator, disabling INT8
[TRT]    [MemUsageChange] Init CUDA: CPU +229, GPU +0, now: CPU 254, GPU 3206 (MiB)
[TRT]    [MemUsageSnapshot] Begin constructing builder kernel library: CPU 254 MiB, GPU 3208 MiB
[TRT]    [MemUsageSnapshot] End constructing builder kernel library: CPU 284 MiB, GPU 3237 MiB
[TRT]    native precisions detected for GPU:  FP32, FP16
[TRT]    selecting fastest native precision for GPU:  FP16
[TRT]    attempting to open engine cache file .1.1.8201.GPU.FP16.engine
[TRT]    cache file not found, profiling network model on device GPU

error:  model file 'networks/SSD-Mobilenet-v1/ssd_mobilenet_v1_coco.uff' was not found.
        if loading a built-in model, maybe it wasn't downloaded before.

        Run the Model Downloader tool again and select it for download:

           $ cd <jetson-inference>/tools
           $ ./download-models.sh

[TRT]    detectNet -- failed to initialize.
jetson.inference -- detectNet failed to load network
Traceback (most recent call last):
  File "my-detection.py", line 38, in <module>
    net = jetson.inference.detectNet(argv=['--network=ssd-mobilenet-v1', '--model=networks/SSD-something-else/ssd_mobilenet_v1_coco.uff', '--labels=networks/SSD-something-else/ssd_coco_labels.txt'])
Exception: jetson.inference -- detectNet failed to load network

I have checked these topics as well, but no luck so far:
https://forums.developer.nvidia.com/t/custom-trained-model-detectnet-jetson-inference/172940/6
https://forums.developer.nvidia.com/t/exception-jetson-inference-detectnet-failed-to-load-network/159382
https://forums.developer.nvidia.com/t/using-re-trained-model-inside-python-script/172718

Environment

TensorRT Version: 8.2.1
GPU Type:
Nvidia Driver Version:
CUDA Version:
CUDNN Version:
Operating System + Version:
Python Version (if applicable):
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag): Container


Hi,

error:  model file 'networks/SSD-Mobilenet-v1/ssd_mobilenet_v1_coco.uff' was not found.

The path is resolved relative to the directory from which the script is executed.
Could you try an absolute path to see if that works?
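A minimal sketch of that suggestion, assuming the files live under a networks/SSD-Mobilenet-v1 directory next to the script (the directory name and filenames are examples, substitute your own):

```python
import os

# Resolve the model directory to an absolute path so the lookup no longer
# depends on the current working directory.
model_dir = os.path.abspath("networks/SSD-Mobilenet-v1")

argv = [
    "--network=ssd-mobilenet-v1",
    "--model=" + os.path.join(model_dir, "ssd_mobilenet_v1_coco.uff"),
    "--labels=" + os.path.join(model_dir, "ssd_coco_labels.txt"),
]

# net = jetson.inference.detectNet(argv=argv)   # requires jetson-inference
print(argv)
```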

Thanks.

I tried an absolute path, but it made no difference.
If I point --model and --labels at the absolute paths of the files inside the ./networks/SSD-Mobilenet-v1 directory, it works, but that is not what I need.
If I point them at the absolute paths of the files in a different directory, I get this error:

Traceback (most recent call last):
  File "test_detection.py", line 133, in <module>
    initialize_gpu_model()
  File "test_detection.py", line 74, in initialize_gpu_model
    with open(labels_full_path) as file:
FileNotFoundError: [Errno 2] No such file or directory: './networks/SSD-Mobilenet-v1/ssd_coco_labels.txt'

It seems detectNet only ever looks in the ./networks/SSD-Mobilenet-v1/ directory.
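One way to make this failure mode clearer is to check that the files exist before constructing the network (the paths below are examples):

```python
import os

# Example paths; substitute your own. Checking up front separates
# "the file is genuinely missing" from "detectNet substituted its
# built-in path for the one we passed".
model_path = "networks/SSD-Mobilenet-v1/ssd_mobilenet_v1_coco.uff"
labels_path = "networks/SSD-Mobilenet-v1/ssd_coco_labels.txt"

missing = [p for p in (model_path, labels_path) if not os.path.isfile(p)]
if missing:
    print("missing files:", missing)
```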

If I remove '--network=ssd-mobilenet-v1' from my detectNet call and give the custom directory for the model files, I get a different error:

[TRT]    3: Cannot find binding of given name: data
[TRT]    failed to find requested input layer data in network
[TRT]    device GPU, failed to create resources for CUDA engine
[TRT]    failed to create TensorRT engine for /home/ubuntu/object-detection/ssd-gpu/netw/SSD-Mobilenet-v1/ssd_mobilenet_v1_coco.uff, device GPU
[TRT]    detectNet -- failed to initialize.
jetson.inference -- detectNet failed to load network
Traceback (most recent call last):
  File "test_detection.py", line 133, in <module>
    initialize_gpu_model()
  File "test_detection.py", line 68, in initialize_gpu_model
    net = jetson.inference.detectNet(argv=['--model=/home/ubuntu/object-detection/ssd-gpu/netw/SSD-Mobilenet-v1/ssd_mobilenet_v1_coco.uff', '--labels=/home/ubuntu/object-detection/ssd-gpu/netw/SSD-Mobilenet-v1/ssd_coco_labels.txt'])
Exception: jetson.inference -- detectNet failed to load network

Yes, IIRC specifying --network=ssd-mobilenet-v1 will override the manual --model/--labels paths that you gave it.

Also, there isn't a CLI / Python interface for loading custom UFF models, because they require additional parameters. (For example, the UFF SSD model's input layer is named 'Input', while the generic code path defaults to 'data', which is why you see 'Cannot find binding of given name: data' above.) Instead you could change it here and rebuild/reinstall the code:

https://github.com/dusty-nv/jetson-inference/blob/384ce60e6ab434bdff5f1973bf4395dcdc9f017d/c/detectNet.cpp#L268

Sorry for the inconvenience with the UFF models.