Description
Hi all,
I am trying to use the built-in SSD-Mobilenet-v1 model in a container, but I would like to keep the model's .uff file and the labels file in a custom directory. However, when I change the directory I get errors. How can I tell jetson.inference.detectNet(…) that my files are somewhere else?
Files:
my-detection.py
networks/SSD-Mobilenet-v1/ssd_mobilenet_v1_coco.uff
networks/SSD-Mobilenet-v1/ssd_coco_labels.txt
This works:
net = jetson.inference.detectNet(argv=['--network=ssd-mobilenet-v1', '--model=networks/SSD-Mobilenet-v1/ssd_mobilenet_v1_coco.uff', '--labels=networks/SSD-Mobilenet-v1/ssd_coco_labels.txt'])
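For completeness, a minimal standalone version of the case that works looks roughly like this (the camera and display URIs are just placeholders, not necessarily what my actual my-detection.py uses):

import jetson.inference
import jetson.utils

# the working call from above: built-in network name plus files under the default networks/ directory
net = jetson.inference.detectNet(argv=['--network=ssd-mobilenet-v1',
                                       '--model=networks/SSD-Mobilenet-v1/ssd_mobilenet_v1_coco.uff',
                                       '--labels=networks/SSD-Mobilenet-v1/ssd_coco_labels.txt'])

camera = jetson.utils.videoSource("csi://0")        # placeholder input stream
display = jetson.utils.videoOutput("display://0")   # placeholder output

while display.IsStreaming():
    img = camera.Capture()          # grab the next frame
    detections = net.Detect(img)    # run SSD-Mobilenet-v1 detection on the frame
    display.Render(img)             # show the frame with the detections overlaid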
For example, if I rename the SSD-Mobilenet-v1 directory to SSD-something-else and update the detectNet call accordingly, like this:
net = jetson.inference.detectNet(argv=['--network=ssd-mobilenet-v1', '--model=networks/SSD-somethin-else/ssd_mobilenet_v1_coco.uff', '--labels=networks/SSD-something-else/ssd_coco_labels.txt'])
Then, I receive this error:
detectNet -- loading detection network model from:
-- model networks/SSD-Mobilenet-v1/ssd_mobilenet_v1_coco.uff
-- input_blob 'Input'
-- output_blob 'Postprocessor'
-- output_count 'Postprocessor_1'
-- class_labels networks/SSD-Mobilenet-v1/ssd_coco_labels.txt
-- threshold 0.500000
-- batch_size 1
[TRT] TensorRT version 8.2.1
[TRT] loading NVIDIA plugins...
[TRT] Registered plugin creator - ::GridAnchor_TRT version 1
[TRT] Registered plugin creator - ::GridAnchorRect_TRT version 1
[TRT] Registered plugin creator - ::NMS_TRT version 1
[TRT] Registered plugin creator - ::Reorg_TRT version 1
[TRT] Registered plugin creator - ::Region_TRT version 1
[TRT] Registered plugin creator - ::Clip_TRT version 1
[TRT] Registered plugin creator - ::LReLU_TRT version 1
[TRT] Registered plugin creator - ::PriorBox_TRT version 1
[TRT] Registered plugin creator - ::Normalize_TRT version 1
[TRT] Registered plugin creator - ::ScatterND version 1
[TRT] Registered plugin creator - ::RPROI_TRT version 1
[TRT] Registered plugin creator - ::BatchedNMS_TRT version 1
[TRT] Registered plugin creator - ::BatchedNMSDynamic_TRT version 1
[TRT] Could not register plugin creator - ::FlattenConcat_TRT version 1
[TRT] Registered plugin creator - ::CropAndResize version 1
[TRT] Registered plugin creator - ::DetectionLayer_TRT version 1
[TRT] Registered plugin creator - ::EfficientNMS_TRT version 1
[TRT] Registered plugin creator - ::EfficientNMS_ONNX_TRT version 1
[TRT] Registered plugin creator - ::EfficientNMS_TFTRT_TRT version 1
[TRT] Registered plugin creator - ::Proposal version 1
[TRT] Registered plugin creator - ::ProposalLayer_TRT version 1
[TRT] Registered plugin creator - ::PyramidROIAlign_TRT version 1
[TRT] Registered plugin creator - ::ResizeNearest_TRT version 1
[TRT] Registered plugin creator - ::Split version 1
[TRT] Registered plugin creator - ::SpecialSlice_TRT version 1
[TRT] Registered plugin creator - ::InstanceNormalization_TRT version 1
[TRT] detected model format - UFF (extension '.uff')
[TRT] desired precision specified for GPU: FASTEST
[TRT] requested fasted precision for device GPU without providing valid calibrator, disabling INT8
[TRT] [MemUsageChange] Init CUDA: CPU +229, GPU +0, now: CPU 254, GPU 3206 (MiB)
[TRT] [MemUsageSnapshot] Begin constructing builder kernel library: CPU 254 MiB, GPU 3208 MiB
[TRT] [MemUsageSnapshot] End constructing builder kernel library: CPU 284 MiB, GPU 3237 MiB
[TRT] native precisions detected for GPU: FP32, FP16
[TRT] selecting fastest native precision for GPU: FP16
[TRT] attempting to open engine cache file .1.1.8201.GPU.FP16.engine
[TRT] cache file not found, profiling network model on device GPU
error: model file 'networks/SSD-Mobilenet-v1/ssd_mobilenet_v1_coco.uff' was not found.
if loading a built-in model, maybe it wasn't downloaded before.
Run the Model Downloader tool again and select it for download:
$ cd <jetson-inference>/tools
$ ./download-models.sh
[TRT] detectNet -- failed to initialize.
jetson.inference -- detectNet failed to load network
Traceback (most recent call last):
File "my-detection.py", line 38, in <module>
net = jetson.inference.detectNet(argv=['--network=ssd-mobilenet-v1', '--model=networks/SSD-somethin-else/ssd_mobilenet_v1_coco.uff', '--labels=networks/SSD-something-else/ssd_coco_labels.txt'])
Exception: jetson.inference -- detectNet failed to load network
I checked these topics as well, but no luck yet.
https://forums.developer.nvidia.com/t/custom-trained-model-detectnet-jetson-inference/172940/6
https://forums.developer.nvidia.com/t/exception-jetson-inference-detectnet-failed-to-load-network/159382
https://forums.developer.nvidia.com/t/using-re-trained-model-inside-python-script/172718
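For reference, this is roughly what I am hoping to be able to do: load the same UFF model and labels from a directory of my choosing, without relying on the built-in networks/ layout. The path below is just a placeholder, and I am only guessing at the layer-name options based on the values printed in the log above (input_blob 'Input', output_blob 'Postprocessor', output_count 'Postprocessor_1'). I don't know whether these flags are actually valid for a UFF model, or whether --network is always required:

import jetson.inference

# /my/custom/dir/ is a placeholder location for the copied model + labels files.
# The blob names are taken from the detectNet log output; I am not sure whether
# --input-blob / --output-blob / --output-count are accepted here for UFF models.
net = jetson.inference.detectNet(argv=[
    '--model=/my/custom/dir/ssd_mobilenet_v1_coco.uff',
    '--labels=/my/custom/dir/ssd_coco_labels.txt',
    '--input-blob=Input',
    '--output-blob=Postprocessor',
    '--output-count=Postprocessor_1',
    '--threshold=0.5'])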
Environment
TensorRT Version:
GPU Type:
Nvidia Driver Version:
CUDA Version:
CUDNN Version:
Operating System + Version:
Python Version (if applicable):
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):
Relevant Files
Please attach or include links to any models, data, files, or scripts necessary to reproduce your issue. (Github repo, Google Drive, Dropbox, etc.)
Steps To Reproduce
Please include:
- Exact steps/commands to build your repro
- Exact steps/commands to run your repro
- Full traceback of errors encountered