Hello, I tried to export a DetectNet_v2 model in INT8 mode to generate calibration.bin, but the export fails. Here is my tlt-export command, followed by the full log:
!tlt-export $USER_EXPERIMENT_DIR/experiment_dir_unpruned/weights/resnet18_detector.tlt \
-o $USER_EXPERIMENT_DIR/experiment_dir_final/resnet18_detector_INT8.etlt \
--outputs output_cov/Sigmoid,output_bbox/BiasAdd \
-k $KEY \
--input_dims 3,720,1280 \
--max_workspace_size 1100000 \
--export_module detectnet_v2 \
--cal_data_file $USER_EXPERIMENT_DIR/experiment_dir_final/calibration.tensor \
--data_type int8 \
--batches 10 \
--cal_cache_file $USER_EXPERIMENT_DIR/experiment_dir_final/calibration.bin \
--cal_batch_size 4 \
--verbose
Using TensorFlow backend.
2019-11-27 11:13:41,293 [INFO] iva.common.magnet_export: Loading model from /workspace/tlt-experiments/experiment_dir_unpruned/weights/resnet18_detector.tlt
2019-11-27 11:13:41.294458: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2019-11-27 11:13:41.338340: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:998] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2019-11-27 11:13:41.338832: I tensorflow/compiler/xla/service/service.cc:150] XLA service 0x5f9ce60 executing computations on platform CUDA. Devices:
2019-11-27 11:13:41.338857: I tensorflow/compiler/xla/service/service.cc:158] StreamExecutor device (0): GeForce GTX 950M, Compute Capability 5.0
2019-11-27 11:13:41.359972: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2593905000 Hz
2019-11-27 11:13:41.361109: I tensorflow/compiler/xla/service/service.cc:150] XLA service 0x6460fd0 executing computations on platform Host. Devices:
2019-11-27 11:13:41.361142: I tensorflow/compiler/xla/service/service.cc:158] StreamExecutor device (0): <undefined>, <undefined>
2019-11-27 11:13:41.361341: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1433] Found device 0 with properties:
name: GeForce GTX 950M major: 5 minor: 0 memoryClockRate(GHz): 1.124
pciBusID: 0000:0a:00.0
totalMemory: 3.95GiB freeMemory: 3.69GiB
2019-11-27 11:13:41.361372: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1512] Adding visible gpu devices: 0
2019-11-27 11:13:41.520957: I tensorflow/core/common_runtime/gpu/gpu_device.cc:984] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-11-27 11:13:41.521002: I tensorflow/core/common_runtime/gpu/gpu_device.cc:990] 0
2019-11-27 11:13:41.521012: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1003] 0: N
2019-11-27 11:13:41.521151: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 3448 MB memory) -> physical GPU (device: 0, name: GeForce GTX 950M, pci bus id: 0000:0a:00.0, compute capability: 5.0)
WARNING:tensorflow:From /usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
2019-11-27 11:13:48,201 [WARNING] tensorflow: From /usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
2019-11-27 11:14:05.070637: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1512] Adding visible gpu devices: 0
2019-11-27 11:14:05.070736: I tensorflow/core/common_runtime/gpu/gpu_device.cc:984] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-11-27 11:14:05.070775: I tensorflow/core/common_runtime/gpu/gpu_device.cc:990] 0
2019-11-27 11:14:05.070790: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1003] 0: N
2019-11-27 11:14:05.070916: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 3448 MB memory) -> physical GPU (device: 0, name: GeForce GTX 950M, pci bus id: 0000:0a:00.0, compute capability: 5.0)
WARNING:tensorflow:From /usr/local/lib/python2.7/dist-packages/tensorflow/python/tools/freeze_graph.py:249: __init__ (from tensorflow.python.platform.gfile) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.gfile.GFile.
2019-11-27 11:14:07,686 [WARNING] tensorflow: From /usr/local/lib/python2.7/dist-packages/tensorflow/python/tools/freeze_graph.py:249: __init__ (from tensorflow.python.platform.gfile) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.gfile.GFile.
WARNING:tensorflow:From /usr/local/lib/python2.7/dist-packages/tensorflow/python/tools/freeze_graph.py:127: checkpoint_exists (from tensorflow.python.training.checkpoint_management) is deprecated and will be removed in a future version.
Instructions for updating:
Use standard file APIs to check for files with this prefix.
2019-11-27 11:14:08,725 [WARNING] tensorflow: From /usr/local/lib/python2.7/dist-packages/tensorflow/python/tools/freeze_graph.py:127: checkpoint_exists (from tensorflow.python.training.checkpoint_management) is deprecated and will be removed in a future version.
Instructions for updating:
Use standard file APIs to check for files with this prefix.
2019-11-27 11:14:09.033192: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1512] Adding visible gpu devices: 0
2019-11-27 11:14:09.033272: I tensorflow/core/common_runtime/gpu/gpu_device.cc:984] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-11-27 11:14:09.033296: I tensorflow/core/common_runtime/gpu/gpu_device.cc:990] 0
2019-11-27 11:14:09.033316: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1003] 0: N
2019-11-27 11:14:09.033405: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 3448 MB memory) -> physical GPU (device: 0, name: GeForce GTX 950M, pci bus id: 0000:0a:00.0, compute capability: 5.0)
INFO:tensorflow:Restoring parameters from /tmp/tmpPkudrd.ckpt
2019-11-27 11:14:09,179 [INFO] tensorflow: Restoring parameters from /tmp/tmpPkudrd.ckpt
WARNING:tensorflow:From /usr/local/lib/python2.7/dist-packages/tensorflow/python/tools/freeze_graph.py:232: convert_variables_to_constants (from tensorflow.python.framework.graph_util_impl) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.compat.v1.graph_util.convert_variables_to_constants
2019-11-27 11:14:09,434 [WARNING] tensorflow: From /usr/local/lib/python2.7/dist-packages/tensorflow/python/tools/freeze_graph.py:232: convert_variables_to_constants (from tensorflow.python.framework.graph_util_impl) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.compat.v1.graph_util.convert_variables_to_constants
WARNING:tensorflow:From /usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/graph_util_impl.py:245: extract_sub_graph (from tensorflow.python.framework.graph_util_impl) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.compat.v1.graph_util.extract_sub_graph
2019-11-27 11:14:09,435 [WARNING] tensorflow: From /usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/graph_util_impl.py:245: extract_sub_graph (from tensorflow.python.framework.graph_util_impl) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.compat.v1.graph_util.extract_sub_graph
INFO:tensorflow:Froze 130 variables.
2019-11-27 11:14:09,554 [INFO] tensorflow: Froze 130 variables.
INFO:tensorflow:Converted 130 variables to const ops.
2019-11-27 11:14:09,600 [INFO] tensorflow: Converted 130 variables to const ops.
WARNING: The version of TensorFlow installed on this system is not guaranteed to work with UFF.
2019-11-27 11:14:40,368 [INFO] iva.common.magnet_export: Calibrating the exported model. Please don't panic as this may take a while.
2019-11-27 11:14:40,368 [ERROR] modulus.export._tensorrt: Specified INT8 but not supported on platform.
Traceback (most recent call last):
File "/usr/local/bin/tlt-export", line 10, in <module>
sys.exit(main())
File "./common/magnet_export.py", line 206, in main
File "./common/magnet_export.py", line 491, in magnet_export
File "./modulus/export/_tensorrt.py", line 515, in __init__
File "./modulus/export/_tensorrt.py", line 385, in __init__
AttributeError: Specified INT8 but not supported on platform.
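
From the traceback, it looks like TensorRT's builder is reporting that this GPU does not support INT8 (the log shows a GeForce GTX 950M with compute capability 5.0). To double-check what TensorRT reports on my machine, I put together the small sketch below (assuming the tensorrt Python bindings that ship in the TLT container are available):

import tensorrt as trt

# Ask the TensorRT builder whether the platform has fast INT8/FP16 kernels
logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
print("fast INT8 supported:", builder.platform_has_fast_int8)
print("fast FP16 supported:", builder.platform_has_fast_fp16)

If this prints False for INT8, is there any way to produce calibration.bin on this card, or does INT8 export require a newer GPU?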