VehicleTypeNet produces no results during inference

Hello. When I run inference with VehicleTypeNet, no results are generated.

Although inference completes without any errors, no result CSV files or annotated images appear in the current directory.

Which parts should I modify to fix this?

Here is the inference log:

==============================
=== TAO Toolkit TensorFlow ===
==============================

NVIDIA Release 4.0.1-TensorFlow (build )
TAO Toolkit Version 4.0.1

Various files include modifications (c) NVIDIA CORPORATION & AFFILIATES.  All rights reserved.

This container image and its contents are governed by the TAO Toolkit End User License Agreement.
By pulling and using the container, you accept the terms and conditions of this license:
https://developer.nvidia.com/tao-toolkit-software-license-agreement

NOTE: Mellanox network driver detected, but NVIDIA peer memory driver not
      detected.  Multi-node communication performance may be reduced.

Using TensorFlow backend.
2023-04-02 11:45:33.171096: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
WARNING:tensorflow:Deprecation warnings have been disabled. Set TF_ENABLE_DEPRECATION_WARNINGS=1 to re-enable them.
/usr/local/lib/python3.6/dist-packages/requests/__init__.py:91: RequestsDependencyWarning: urllib3 (1.26.5) or chardet (3.0.4) doesn't match a supported version!
  RequestsDependencyWarning)
2023-04-02 11:45:36.312953: I tensorflow/core/platform/profile_utils/cpu_utils.cc:109] CPU Frequency: 2294745000 Hz
2023-04-02 11:45:36.313902: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x832ea70 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2023-04-02 11:45:36.313930: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): Host, Default Version
2023-04-02 11:45:36.315144: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcuda.so.1
2023-04-02 11:45:36.606931: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1082] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2023-04-02 11:45:36.613992: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1082] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2023-04-02 11:45:36.617735: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1082] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2023-04-02 11:45:36.622808: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1082] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2023-04-02 11:45:36.623343: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x83a2790 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:
2023-04-02 11:45:36.623367: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): Tesla P100-SXM2-16GB, Compute Capability 6.0
2023-04-02 11:45:36.623378: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (1): Tesla P100-SXM2-16GB, Compute Capability 6.0
2023-04-02 11:45:36.623385: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (2): Tesla P100-SXM2-16GB, Compute Capability 6.0
2023-04-02 11:45:36.623394: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (3): Tesla P100-SXM2-16GB, Compute Capability 6.0
2023-04-02 11:45:36.625397: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1082] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2023-04-02 11:45:36.625624: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1669] Found device 0 with properties: 
name: Tesla P100-SXM2-16GB major: 6 minor: 0 memoryClockRate(GHz): 1.4805
pciBusID: 0000:00:06.0
2023-04-02 11:45:36.625722: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1082] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2023-04-02 11:45:36.625941: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1669] Found device 1 with properties: 
name: Tesla P100-SXM2-16GB major: 6 minor: 0 memoryClockRate(GHz): 1.4805
pciBusID: 0000:00:07.0
2023-04-02 11:45:36.626003: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1082] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2023-04-02 11:45:36.626206: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1669] Found device 2 with properties: 
name: Tesla P100-SXM2-16GB major: 6 minor: 0 memoryClockRate(GHz): 1.4805
pciBusID: 0000:00:08.0
2023-04-02 11:45:36.626259: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1082] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2023-04-02 11:45:36.626460: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1669] Found device 3 with properties: 
name: Tesla P100-SXM2-16GB major: 6 minor: 0 memoryClockRate(GHz): 1.4805
pciBusID: 0000:00:09.0
2023-04-02 11:45:36.626499: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
2023-04-02 11:45:36.626582: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublas.so.11
2023-04-02 11:45:36.629780: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcufft.so.10
2023-04-02 11:45:36.629881: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcurand.so.10
2023-04-02 11:45:36.630674: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusolver.so.11
2023-04-02 11:45:36.631639: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusparse.so.11
2023-04-02 11:45:36.631729: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudnn.so.8
2023-04-02 11:45:36.631814: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1082] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2023-04-02 11:45:36.632096: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1082] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2023-04-02 11:45:36.632350: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1082] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2023-04-02 11:45:36.632595: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1082] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2023-04-02 11:45:36.632848: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1082] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2023-04-02 11:45:36.633077: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1082] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2023-04-02 11:45:36.633304: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1082] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2023-04-02 11:45:36.633524: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1082] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2023-04-02 11:45:36.633746: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1797] Adding visible gpu devices: 0, 1, 2, 3
2023-04-02 11:45:36.633783: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
2023-04-02 11:45:37.804274: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1209] Device interconnect StreamExecutor with strength 1 edge matrix:
2023-04-02 11:45:37.804325: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1215]      0 1 2 3 
2023-04-02 11:45:37.804334: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1228] 0:   N Y Y Y 
2023-04-02 11:45:37.804339: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1228] 1:   Y N Y Y 
2023-04-02 11:45:37.804344: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1228] 2:   Y Y N Y 
2023-04-02 11:45:37.804349: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1228] 3:   Y Y Y N 
2023-04-02 11:45:37.804692: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1082] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2023-04-02 11:45:37.805008: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1082] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2023-04-02 11:45:37.805271: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1082] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2023-04-02 11:45:37.805521: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1082] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2023-04-02 11:45:37.805826: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1082] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2023-04-02 11:45:37.806070: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1354] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 14978 MB memory) -> physical GPU (device: 0, name: Tesla P100-SXM2-16GB, pci bus id: 0000:00:06.0, compute capability: 6.0)
2023-04-02 11:45:37.806703: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1082] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2023-04-02 11:45:37.806956: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1354] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:1 with 15232 MB memory) -> physical GPU (device: 1, name: Tesla P100-SXM2-16GB, pci bus id: 0000:00:07.0, compute capability: 6.0)
2023-04-02 11:45:37.807489: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1082] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2023-04-02 11:45:37.807739: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1354] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:2 with 15232 MB memory) -> physical GPU (device: 2, name: Tesla P100-SXM2-16GB, pci bus id: 0000:00:08.0, compute capability: 6.0)
2023-04-02 11:45:37.808189: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1082] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2023-04-02 11:45:37.808442: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1354] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:3 with 15232 MB memory) -> physical GPU (device: 3, name: Tesla P100-SXM2-16GB, pci bus id: 0000:00:09.0, compute capability: 6.0)
Using TensorFlow backend.
WARNING:tensorflow:Deprecation warnings have been disabled. Set TF_ENABLE_DEPRECATION_WARNINGS=1 to re-enable them.
/usr/local/lib/python3.6/dist-packages/requests/__init__.py:91: RequestsDependencyWarning: urllib3 (1.26.5) or chardet (3.0.4) doesn't match a supported version!
  RequestsDependencyWarning)
INFO: Loading experiment spec at /workspace/tao-experiments/vehicletype/classification_retrain_spec.cfg.
Telemetry data couldn't be sent, but the command ran successfully.
[WARNING]: <urlopen error [Errno -2] Name or service not known>
Execution status: PASS

Here is the spec file (classification_retrain_spec.cfg):

model_config {
  arch: "resnet",
  n_layers: 18
  use_batch_norm: true
  all_projections: true
  input_image_size: "3,224,224"
}
train_config {
  train_dataset_path: "/workspace/tao-experiments/vehicletype/data/train"
  val_dataset_path: "/workspace/tao-experiments/vehicletype/data/val"
  pretrained_model_path: "/workspace/tao-experiments/vehicletype/resnet18_vehicletypenet.tlt"
  optimizer {
    sgd {
      lr: 0.01
      decay: 0.0
      momentum: 0.9
      nesterov: False
    }
  }
  batch_size_per_gpu: 64
  n_epochs: 80
  n_workers: 16
  preprocess_mode: "caffe"
  enable_random_crop: True
  enable_center_crop: True
  label_smoothing: 0.0
  mixup_alpha: 0.1
  reg_config {
    type: "L2"
    scope: "Conv2D,Dense"
    weight_decay: 0.00005
  }
  lr_config {
    step {
      learning_rate: 0.006
      step_size: 10
      gamma: 0.1
    }
  }
}
eval_config {
  eval_dataset_path: "/workspace/tao-experiments/vehicletype/data"
  model_path: "/workspace/tao-experiments/vehicletype/resnet18_vehicletypenet.tlt"
  top_k: 3
  batch_size: 256
  n_workers: 8
  enable_center_crop: True
}

Here is my command:

sudo docker run -it --rm -v /home/ubuntu/tao_test_2023/vehicletype/:/workspace/tao-experiments/vehicletype/ \
    nvcr.io/nvidia/tao/tao-toolkit:4.0.1-tf1.15.5 \
    classification_tf1 inference \
    -e /workspace/tao-experiments/vehicletype/classification_retrain_spec.cfg \
    -m /workspace/tao-experiments/vehicletype/resnet18_vehicletypenet.tlt \
    -k nvidia_tlt \
    -b 32 \
    -d workspace/tao-experiments/vehicletype/data \
    -cm /workspace/tao-experiments/vehicletype/classmap.json

Here is the classmap.json I modified:

{"coupe":0, "sedan":1, "SUV":2, "van":3, "large vehicle":4, "truck":5}
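As a quick sanity check (a hypothetical snippet, not part of TAO), you can verify that the classmap parses as valid JSON and, assuming the indices are expected to cover 0..N-1 with no gaps, that they form a contiguous range:

```python
import json

# Hypothetical sanity check for the classmap file: it maps class names to
# integer indices, which (we assume here) should be contiguous from 0.
classmap = json.loads(
    '{"coupe":0, "sedan":1, "SUV":2, "van":3, "large vehicle":4, "truck":5}'
)
indices = sorted(classmap.values())
assert indices == list(range(len(classmap))), "indices must be 0..N-1 with no gaps"
print("classmap OK:", len(classmap), "classes")
```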

Thank you for your help in advance.

Please change the key (-k) to tlt_encode.

This key is documented on the VehicleTypeNet page on NVIDIA NGC.

Thank you very much!

However, I now got a new error: FileNotFoundError: [Errno 2] No such file or directory: 'workspace/tao-experiments/vehicletype/data/result.csv'

Should I create a file named result.csv beforehand?

Your -d path is missing a leading "/": it should be -d /workspace/tao-experiments/vehicletype/data.
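To illustrate why this matters (a minimal sketch, purely for illustration): a path without the leading "/" is relative, so it resolves against the container's working directory rather than the mounted volume, and the tool then tries to write result.csv to a directory that does not exist:

```python
# Illustration of the bug: a path lacking the leading "/" is relative and
# resolves against the current working directory inside the container, so
# result.csv cannot be written where the data actually lives.
wrong = "workspace/tao-experiments/vehicletype/data"   # from the original -d flag
right = "/workspace/tao-experiments/vehicletype/data"  # the mounted volume path

for path in (wrong, right):
    kind = "absolute" if path.startswith("/") else "relative (missing leading '/')"
    print(path, "->", kind)
```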

It works! Thank you very much!

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.