Error while executing the Faster R-CNN example in the officially provided TLT Docker image on my Intel computer

Hello everyone.
I’m here because I’ve been unable to finish the execution of the Faster R-CNN TLT example.
I would appreciate any help with this, and I’ll provide any information you need.

=========== SOME INFO ===========

I got my key and set my environment variables correctly; the folder mapping is also verified.
The only other changes are the batch sizes, since my video card has only 2 GB of memory.
Everything else is exactly as shown in the example; I’m even using the exact same dataset provided.
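
For completeness, the environment cell at the top of the notebook looks roughly like this for me (key redacted; the paths match the logs below):

%env KEY=<my key>
%env USER_EXPERIMENT_DIR=/workspace/tlt-experiments
%env SPECS_DIR=/workspace/examples/faster_rcnn/specs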

I got the Docker image from here: Transfer Learning Toolkit for Video Streaming Analytics | NVIDIA NGC
using the provided command: docker pull nvcr.io/nvidia/tlt-streamanalytics:v2.0_dp_py2
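
For reference, I start the container roughly like this (the host path is mine), mounting my experiments folder into /workspace/tlt-experiments and exposing Jupyter:

docker run --runtime=nvidia -it \
           -v /home/<user>/tlt-experiments:/workspace/tlt-experiments \
           -p 8888:8888 \
           nvcr.io/nvidia/tlt-streamanalytics:v2.0_dp_py2 /bin/bash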

I have an NVIDIA Jetson Nano, which is incompatible with TLT, so I’m running TLT on my computer.

To save some resources, I’ve deactivated the GUI on the Jetson Nano; I operate it only in console mode, over SSH and shared folders with other computers on the network.
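
(In case it matters, I disabled the GUI with the usual Ubuntu systemd target switch:

sudo systemctl set-default multi-user.target
sudo reboot

and it can be restored later with sudo systemctl set-default graphical.target.)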

=========== Errors ===========
I can go through the tutorial smoothly until section eight: I can train the model, prune it, retrain the pruned model, and evaluate its metrics.
Problems start with this cell:

# Running inference for detection on n images
# Please go to $USER_EXPERIMENT_DIR/data/faster_rcnn/inference_results_imgs_retrain to see the visualizations.
!tlt-infer faster_rcnn -e $SPECS_DIR/default_spec_resnet18_retrain_spec.txt

Error log

Using TensorFlow backend.
2020-06-08 18:11:24,442 [INFO] /usr/local/lib/python2.7/dist-packages/iva/faster_rcnn/spec_loader/spec_loader.pyc: Loading experiment spec at /workspace/examples/faster_rcnn/specs/default_spec_resnet18_retrain_spec.txt.
2020-06-08 18:11:24,471 [INFO] /usr/local/lib/python2.7/dist-packages/iva/faster_rcnn/scripts/inference.pyc: Running inference with TensorRT as backend.
2020-06-08 18:11:24,480 [INFO] /usr/local/lib/python2.7/dist-packages/iva/faster_rcnn/tensorrt_inference/tensorrt_model.pyc: Loading TensorRT engine file: /workspace/tlt-experiments/data/faster_rcnn/trt.fp16.engine for inference.
2020-06-08 18:11:26,201 [INFO] /usr/local/lib/python2.7/dist-packages/iva/faster_rcnn/scripts/inference.pyc: 000000.png
[TensorRT] WARNING: Current optimization profile is: 0. Please ensure there are no enqueued operations pending in this context prior to switching profiles
#assertion/trt_oss_src/TensorRT/plugin/common/kernels/proposalKernel.cu,709
Aborted (core dumped)

so I can’t see the resulting images to verify that the network works.
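
For reference, tlt-infer picks the TensorRT backend because of the trt_inference block in my retrain spec (full spec files are at the end of this post); when the error occurred it was set to the FP16 engine, matching the path in the log:

trt_inference {
  trt_engine: '/workspace/tlt-experiments/data/faster_rcnn/trt.fp16.engine'
  trt_data_type: 'fp16'
}

If I understand the tool correctly, removing this block should make tlt-infer fall back to running the .tlt model directly instead of a TensorRT engine, though I haven’t confirmed that.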

I have trouble in section nine too. When I try to export the network, I get these messages.

Exporting to FP32.

# Export in FP32 mode. \
!tlt-export faster_rcnn -m $USER_EXPERIMENT_DIR/data/faster_rcnn/frcnn_kitti_resnet18_retrain.epoch12.tlt  \
                        -o $USER_EXPERIMENT_DIR/data/faster_rcnn/frcnn_kitti_resnet18_retrain.etlt \
                        -e $SPECS_DIR/default_spec_resnet18_retrain_spec.txt \
                        -k $KEY

Error log

Using TensorFlow backend.
2020-06-08 20:24:11,397 [INFO] /usr/local/lib/python2.7/dist-packages/iva/faster_rcnn/spec_loader/spec_loader.pyc: Loading experiment spec at /workspace/examples/faster_rcnn/specs/default_spec_resnet18_retrain_spec.txt.
2020-06-08 20:25:02,715 [INFO] /usr/local/lib/python2.7/dist-packages/iva/faster_rcnn/spec_loader/spec_loader.pyc: Loading experiment spec at /workspace/examples/faster_rcnn/specs/default_spec_resnet18_retrain_spec.txt.
NOTE: UFF has been tested with TensorFlow 1.14.0.
WARNING: The version of TensorFlow installed on this system is not guaranteed to work with UFF.
DEBUG: convert reshape to flatten node
Warning: No conversion function registered for layer: CropAndResize yet.
Converting roi_pooling_conv_1/CropAndResize_new as custom op: CropAndResize
Warning: No conversion function registered for layer: Proposal yet.
Converting proposal as custom op: Proposal
DEBUG [/usr/lib/python2.7/dist-packages/uff/converters/tensorflow/converter.py:96] Marking ['proposal', 'dense_class_td/Softmax', 'dense_regress_td/BiasAdd'] as outputs
[TensorRT] INFO: Some tactics do not have sufficient workspace memory to run. Increasing workspace size may increase performance, please check verbose output.
[TensorRT] INFO: Detected 1 inputs and 3 output network tensors.

Exporting to FP16.

# Export in FP16 mode. \
# Note that the .etlt model in FP16 mode is  \
# the same as in FP32 mode. \
!rm $USER_EXPERIMENT_DIR/data/faster_rcnn/frcnn_kitti_resnet18_retrain_fp16.etlt
!tlt-export faster_rcnn -m $USER_EXPERIMENT_DIR/data/faster_rcnn/frcnn_kitti_resnet18_retrain.epoch12.tlt  \
                        -o $USER_EXPERIMENT_DIR/data/faster_rcnn/frcnn_kitti_resnet18_retrain_fp16.etlt \
                        -e $SPECS_DIR/default_spec_resnet18_retrain_spec.txt \
                        -k $KEY \
                        --data_type fp16

Error log

 Using TensorFlow backend.
2020-06-08 20:37:51,065 [INFO] /usr/local/lib/python2.7/dist-packages/iva/faster_rcnn/spec_loader/spec_loader.pyc: Loading experiment spec at /workspace/examples/faster_rcnn/specs/default_spec_resnet18_retrain_spec.txt.
2020-06-08 20:38:41,524 [INFO] /usr/local/lib/python2.7/dist-packages/iva/faster_rcnn/spec_loader/spec_loader.pyc: Loading experiment spec at /workspace/examples/faster_rcnn/specs/default_spec_resnet18_retrain_spec.txt.
NOTE: UFF has been tested with TensorFlow 1.14.0.
WARNING: The version of TensorFlow installed on this system is not guaranteed to work with UFF.
DEBUG: convert reshape to flatten node
Warning: No conversion function registered for layer: CropAndResize yet.
Converting roi_pooling_conv_1/CropAndResize_new as custom op: CropAndResize
Warning: No conversion function registered for layer: Proposal yet.
Converting proposal as custom op: Proposal
DEBUG [/usr/lib/python2.7/dist-packages/uff/converters/tensorflow/converter.py:96] Marking ['proposal', 'dense_class_td/Softmax', 'dense_regress_td/BiasAdd'] as outputs
2020-06-08 20:39:41,057 [ERROR] modulus.export._tensorrt: Specified FP16 but not supported on platform.
Traceback (most recent call last):
File "/usr/local/bin/tlt-export", line 8, in <module>
    sys.exit(main())
File "./common/export/app.py", line 234, in main
File "./common/export/base_exporter.py", line 411, in export
File "./modulus/export/_tensorrt.py", line 515, in __init__
File "./modulus/export/_tensorrt.py", line 380, in __init__
AttributeError: Specified FP16 but not supported on platform.

=========== Discussion ===========

It’s strange to see the warning about the installed version of TensorFlow not being guaranteed to work with UFF, since it’s the version that ships with the image.
I also tried to pass the FP32 model and the last retrain epoch to deepstream-app on my Jetson Nano; the idea was to let it create the engine file, as it did every other time I loaded an example, but I got a core dump.
How can I get the model working so I can continue with the development process?

=========== HARDWARE DETAILS ===========

 Machine:   Device: laptop System: Dell product: Inspiron 7559 v: 1.2.9 serial: N/A
            UEFI: Dell v: 1.2.9 date: 09/03/2018
 CPU:       Quad core Intel Core i7-6700HQ (-MT-MCP-) cache: 6144 KB
            clock speeds: max: 3500 MHz 1: 1513 MHz 2: 2220 MHz 3: 1842 MHz
            4: 1430 MHz 5: 1860 MHz 6: 1892 MHz 7: 1971 MHz 8: 1995 MHz
 Graphics:  Card-1: Intel HD Graphics 530
            Card-2: NVIDIA GM107M [GeForce GTX 960M]
            Display Server: x11 (X.Org 1.20.5 )
            drivers: modesetting,nvidia (unloaded: fbdev,vesa,nouveau)
            Resolution: 1920x1080@60.02hz
            OpenGL: renderer: GeForce GTX 960M/PCIe/SSE2
            version: 4.6.0 NVIDIA 440.82
 Drives:    HDD Total Size: 750.2GB (14.9% used)
            ID-1: /dev/sda model: Samsung_SSD_860 size: 250.1GB
            ID-2: /dev/sdb model: CT500MX500SSD1 size: 500.1GB
 Info:      Processes: 365 Uptime: 3:00 Memory: 3920.5/7827.7MB
            Client: Shell (bash) inxi: 2.3.56 

=========== SOFTWARE DETAILS ===========

  System:    Host: jpablo-Inspiron-7559 Kernel: 5.3.0-53-generic x86_64
             bits: 64
             Desktop: Gnome 3.28.4 Distro: Ubuntu 18.04.4 LTS
  Docker:    Server Version: 19.03.8
  	         Image repository: nvcr.io/nvidia/tlt-streamanalytics
  	         Image tag: v2.0_dp_py2
  Nvidia:    Driver Version: 440.82
  	         CUDA Version 10.2.89

============== ON JETSON TEST ============
I tried to make deepstream-app build the network engine directly on the Jetson Nano.
Here is an extract of the config file that’s supposed to use the network:

[primary-gie]
enable=1
gpu-id=0
model-engine-file=/opt/nvidia/deepstream/deepstream/controlflow/models/Controlflow_tlt/frcnn_kitti_resnet18_retrain.epoch12.tlt.engine
batch-size=8
#Required by the app for OSD, not a plugin property
## 0=FP32, 1=INT8, 2=FP16 mode
bbox-border-color0=1;0;0;1
bbox-border-color1=0;1;1;1
bbox-border-color2=0;0;1;1
bbox-border-color3=0;1;0;1
interval=10
gie-unique-id=1
nvbuf-memory-type=2
config-file=config_infer_controlflow.txt

And here is the config_infer_controlflow.txt

# Copyright (c) 2020 NVIDIA Corporation.  All rights reserved.
#
# NVIDIA Corporation and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto.  Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA Corporation is strictly prohibited.

[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
tlt-model-key=<I'm not posting my key on the internet>
#tlt-encoded-model=../models/Controlflow_tlt/frcnn_kitti_resnet18_retrain_pf16.etlt
#tlt-encoded-model=../models/Controlflow_tlt/frcnn_kitti_resnet18_retrain.epoch12.tlt
tlt-encoded-model=../models/Controlflow_tlt/frcnn_kitti_resnet18_retrain.etlt
labelfile-path=../models/Controlflow_tlt/labels.txt
#int8-calib-file=../models/Controlflow_tlt/dashcamnet_int8.txt
#model-engine-file=../models/Controlflow_tlt/frcnn_kitti_resnet18_retrain_fp16.etlt.engine
#model-engine-file=../models/Controlflow_tlt/frcnn_kitti_resnet18_retrain.epoch12.tlt.engine
model-engine-file=../models/Controlflow_tlt/frcnn_kitti_resnet18_retrain.etlt.engine
#input-dims=3;384;1248;0
input-dims=3;544;960;0
uff-input-blob-name=input_1
batch-size= 1 #8 #3
process-mode=1
model-color-format=0
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=0
num-detected-classes=7
interval=2
gie-unique-id=1
output-blob-names=output_bbox/BiasAdd;output_cov/Sigmoid

[class-attrs-all]
pre-cluster-threshold=0.2
group-threshold=1
## Set eps=0.7 and minBoxes for cluster-mode=1(DBSCAN)
eps=0.2
#minBoxes=3
roi-top-offset=0
roi-bottom-offset=0
detected-min-w=0
detected-min-h=0
detected-max-w=0
detected-max-h=0

================= Clarification ===============
I’m running the TLT image on my personal computer, described above, not on the Jetson Nano.

Hi,

Since the Nano has limited memory, could you first monitor the system status to check for an OOM error?

$ sudo tegrastats
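
If tegrastats looks normal, you can also check the kernel log for OOM kills (a generic Linux check, not Jetson-specific):

$ dmesg | grep -i "killed process"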

Thanks.

Yes, of course.
On the Nano, when executing this:
deepstream-app -c /opt/nvidia/deepstream/deepstream/controlflow/config/controlflow3.txt
I get this log:

 *** DeepStream: Launched RTSP Streaming at rtsp://localhost:8555/ds-test ***


(deepstream-app:3897): GLib-GObject-WARNING **: 09:42:54.709: value "((GstNvVidConvBufMemoryType) 2)" of type 'GstNvVidConvBufMemoryType' is invalid or out of range for property 'nvbuf-memory-type' of type 'GstNvVidConvBufMemoryType'
Opening in BLOCKING MODE 
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream-5.0/lib/libnvds_mot_klt.so
gstnvtracker: Optional NvMOT_RemoveStreams not implemented
gstnvtracker: Batch processing is OFF
ERROR: Deserialize engine failed because file path: /opt/nvidia/deepstream/deepstream/controlflow/models/Controlflow_tlt/frcnn_kitti_resnet18_retrain.epoch12.tlt.engine open error
0:00:01.572557814  3897     0x167fb930 WARN                 nvinfer gstnvinfer.cpp:599:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1566> [UID = 1]: deserialize engine from file :/opt/nvidia/deepstream/deepstream/controlflow/models/Controlflow_tlt/frcnn_kitti_resnet18_retrain.epoch12.tlt.engine failed
0:00:01.572652972  3897     0x167fb930 WARN                 nvinfer gstnvinfer.cpp:599:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1673> [UID = 1]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream/controlflow/models/Controlflow_tlt/frcnn_kitti_resnet18_retrain.epoch12.tlt.engine failed, try rebuild
0:00:01.572687035  3897     0x167fb930 INFO                 nvinfer gstnvinfer.cpp:602:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1591> [UID = 1]: Trying to create engine from model files
WARNING: INT8 not supported by platform. Trying FP16 mode.
ERROR: [TRT]: Parameter check failed at: ../builder/Network.cpp::addInput::1012, condition: isValidDims(dims, hasImplicitBatchDimension())
ERROR: [TRT]: UFFParser: Failed to parseInput for node input_image
ERROR: [TRT]: UffParser: Parser error: input_image: Failed to parse node - Invalid Tensor found at node input_image
parseModel: Failed to parse UFF model
ERROR: failed to build network since parsing model errors.
ERROR: Failed to create network using custom network creation function
ERROR: Failed to get cuda engine from custom library API
0:00:01.772185885  3897     0x167fb930 ERROR                nvinfer gstnvinfer.cpp:596:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1611> [UID = 1]: build engine file failed
Segmentation fault (core dumped)

The output on the sudo tegrastats terminal (while the other command runs in another tab) is this:

RAM 972/3956MB (lfb 148x4MB) SWAP 0/1978MB (cached 0MB) IRAM 0/252kB(lfb 252kB) CPU [13%@1479,7%@1479,3%@1479,98%@1479] EMC_FREQ 3%@1600 GR3D_FREQ 7%@921 APE 25 PLL@30.5C CPU@33C PMIC@100C GPU@32.5C AO@35.5C thermal@32.75C POM_5V_IN 3394/3394 POM_5V_GPU 245/245 POM_5V_CPU 1429/1429
RAM 1102/3956MB (lfb 148x4MB) SWAP 0/1978MB (cached 0MB) IRAM 0/252kB(lfb 252kB) CPU [24%@1479,7%@1479,5%@1479,84%@1479] EMC_FREQ 3%@1600 GR3D_FREQ 0%@921 APE 25 PLL@30C CPU@33C PMIC@100C GPU@32.5C AO@35.5C thermal@32.5C POM_5V_IN 2876/3135 POM_5V_GPU 123/184 POM_5V_CPU 1027/1228
RAM 795/3956MB (lfb 148x4MB) SWAP 0/1978MB (cached 0MB) IRAM 0/252kB(lfb 252kB) CPU [33%@1479,3%@1479,6%@1479,4%@1479] EMC_FREQ 3%@1600 GR3D_FREQ 0%@921 APE 25 PLL@30C CPU@32.5C PMIC@100C GPU@32.5C AO@35C thermal@32.25C POM_5V_IN 2070/2780 POM_5V_GPU 124/164 POM_5V_CPU 248/901
RAM 795/3956MB (lfb 148x4MB) SWAP 0/1978MB (cached 0MB) IRAM 0/252kB(lfb 252kB) CPU [1%@1479,3%@1479,0%@1479,0%@1479] EMC_FREQ 3%@1600 GR3D_FREQ 0%@921 APE 25 PLL@30C CPU@32C PMIC@100C GPU@32.5C AO@35C thermal@32C POM_5V_IN 2070/2602 POM_5V_GPU 124/154 POM_5V_CPU 248/738
RAM 795/3956MB (lfb 148x4MB) SWAP 0/1978MB (cached 0MB) IRAM 0/252kB(lfb 252kB) CPU [0%@1479,7%@1479,0%@1479,0%@1479] EMC_FREQ 2%@1600 GR3D_FREQ 0%@921 APE 25 PLL@30C CPU@32C PMIC@100C GPU@32.5C AO@35C thermal@32.25C POM_5V_IN 2070/2496 POM_5V_GPU 124/148 POM_5V_CPU 248/640
RAM 795/3956MB (lfb 148x4MB) SWAP 0/1978MB (cached 0MB) IRAM 0/252kB(lfb 252kB) CPU [1%@1479,7%@1479,0%@1479,0%@1479] EMC_FREQ 2%@1600 GR3D_FREQ 0%@921 APE 25 PLL@30C CPU@32C PMIC@100C GPU@32C AO@35C thermal@32.25C POM_5V_IN 2070/2425 POM_5V_GPU 124/144 POM_5V_CPU 248/574
RAM 795/3956MB (lfb 148x4MB) SWAP 0/1978MB (cached 0MB) IRAM 0/252kB(lfb 252kB) CPU [1%@1479,5%@1479,0%@1479,0%@1479] EMC_FREQ 2%@1600 GR3D_FREQ 0%@921 APE 25 PLL@29.5C CPU@32C PMIC@100C GPU@32.5C AO@35C thermal@32.25C POM_5V_IN 2070/2374 POM_5V_GPU 124/141 POM_5V_CPU 248/528
RAM 795/3956MB (lfb 148x4MB) SWAP 0/1978MB (cached 0MB) IRAM 0/252kB(lfb 252kB) CPU [3%@1479,0%@1479,0%@1479,0%@1479] EMC_FREQ 2%@1600 GR3D_FREQ 0%@921 APE 25 PLL@30C CPU@32C PMIC@100C GPU@32.5C AO@35C thermal@32C POM_5V_IN 2070/2336 POM_5V_GPU 124/139 POM_5V_CPU 248/493
RAM 795/3956MB (lfb 148x4MB) SWAP 0/1978MB (cached 0MB) IRAM 0/252kB(lfb 252kB) CPU [0%@1479,0%@1479,0%@1479,0%@1479] EMC_FREQ 2%@1600 GR3D_FREQ 0%@921 APE 25 PLL@30C CPU@32C PMIC@100C GPU@32.5C AO@35C thermal@32.25C POM_5V_IN 2070/2306 POM_5V_GPU 124/137 POM_5V_CPU 248/465
RAM 795/3956MB (lfb 148x4MB) SWAP 0/1978MB (cached 0MB) IRAM 0/252kB(lfb 252kB) CPU [1%@1479,0%@1479,0%@1479,0%@1479] EMC_FREQ 2%@1600 GR3D_FREQ 0%@921 APE 25 PLL@29.5C CPU@32C PMIC@100C GPU@32.5C AO@35C thermal@32.25C POM_5V_IN 2070/2283 POM_5V_GPU 124/136 POM_5V_CPU 248/444
RAM 795/3956MB (lfb 148x4MB) SWAP 0/1978MB (cached 0MB) IRAM 0/252kB(lfb 252kB) CPU [1%@1479,0%@1479,0%@1479,0%@1479] EMC_FREQ 2%@1600 GR3D_FREQ 0%@921 APE 25 PLL@30C CPU@32C PMIC@100C GPU@32.5C AO@35C thermal@32C POM_5V_IN 2070/2263 POM_5V_GPU 124/134 POM_5V_CPU 248/426

================== Some more info =========================
(I’ll edit the original post to include this.)
I’ve deactivated the GUI; I’m operating only in console mode, over SSH and shared folders with other computers on the network, so some resources are saved.

Thanks for the response

By the way, I don’t think it is an OOM error on the Nano: the tegrastats output above shows RAM peaking around 1.1 GB of the 3.9 GB available with no swap used, and in any case the first problems appear in the TLT image running on my personal computer.

Hi, I hit the same error
while running the SSD example provided officially. Nothing was changed in the config file, and the dataset is KITTI, following the example.
The training stage is fine, but errors occur when I try to export the model with '--data_type fp16'.
I have not tested it on Jetson (Xavier) yet; I will try it later.
Here is the error information, which is the same as @ai12’s.
I also want to know: would this error make inference fail on Jetson?

Using TensorFlow backend.
2020-06-11 02:31:45,071 [INFO] /usr/local/lib/python2.7/dist-packages/iva/ssd/utils/spec_loader.pyc: Merging specification from /workspace/examples/ssd/specs/ssd_retrain_resnet18_kitti.txt
2020-06-11 02:31:47,239 [INFO] /usr/local/lib/python2.7/dist-packages/iva/ssd/utils/spec_loader.pyc: Merging specification from /workspace/examples/ssd/specs/ssd_retrain_resnet18_kitti.txt
NOTE: UFF has been tested with TensorFlow 1.14.0.
WARNING: The version of TensorFlow installed on this system is not guaranteed to work with UFF.
Warning: No conversion function registered for layer: NMS_TRT yet.
Converting NMS as custom op: NMS_TRT
Warning: No conversion function registered for layer: BatchTilePlugin_TRT yet.
Converting FirstDimTile_5 as custom op: BatchTilePlugin_TRT
Warning: No conversion function registered for layer: BatchTilePlugin_TRT yet.
Converting FirstDimTile_4 as custom op: BatchTilePlugin_TRT
Warning: No conversion function registered for layer: BatchTilePlugin_TRT yet.
Converting FirstDimTile_3 as custom op: BatchTilePlugin_TRT
Warning: No conversion function registered for layer: BatchTilePlugin_TRT yet.
Converting FirstDimTile_2 as custom op: BatchTilePlugin_TRT
Warning: No conversion function registered for layer: BatchTilePlugin_TRT yet.
Converting FirstDimTile_1 as custom op: BatchTilePlugin_TRT
Warning: No conversion function registered for layer: BatchTilePlugin_TRT yet.
Converting FirstDimTile_0 as custom op: BatchTilePlugin_TRT
DEBUG [/usr/lib/python2.7/dist-packages/uff/converters/tensorflow/converter.py:96] Marking ['NMS'] as outputs
2020-06-11 02:31:56,846 [ERROR] modulus.export._tensorrt: Specified FP16 but not supported on platform.
Traceback (most recent call last):
File "/usr/local/bin/tlt-export", line 8, in <module>
    sys.exit(main())
File "./common/export/app.py", line 234, in main
File "./common/export/base_exporter.py", line 411, in export
File "./modulus/export/_tensorrt.py", line 515, in __init__
File "./modulus/export/_tensorrt.py", line 380, in __init__
AttributeError: Specified FP16 but not supported on platform.

================ SPEC FILES ======================
Here are the spec files I’m using.

default_spec_resnet18.txt

# Copyright (c) 2017-2019, NVIDIA CORPORATION.  All rights reserved.
random_seed: 42
enc_key: <of course I'm not posting my key on the internet>
verbose: True
network_config {
  input_image_config {
    image_type: RGB
    image_channel_order: 'bgr'
    size_height_width {
      height: 384
      width: 1248
    }
    image_channel_mean {
      key: 'b'
      value: 103.939
    }
    image_channel_mean {
      key: 'g'
      value: 116.779
    }
    image_channel_mean {
      key: 'r'
      value: 123.68
    }
    image_scaling_factor: 1.0
    max_objects_num_per_image: 100
  }
  feature_extractor: "resnet:18"
  anchor_box_config {
    scale: 64.0
    scale: 128.0
    scale: 256.0
    ratio: 1.0
    ratio: 0.5
    ratio: 2.0
  }
  freeze_bn: True
  freeze_blocks: 0
  freeze_blocks: 1
  roi_mini_batch: 8 #256
  rpn_stride: 16
  conv_bn_share_bias: True
  roi_pooling_config {
    pool_size: 7
    pool_size_2x: False
  }
  all_projections: True
  use_pooling: False
}
training_config {
  kitti_data_config {
    data_sources: {
      tfrecords_path: "/workspace/tlt-experiments/tfrecords/kitti_trainval/kitti_trainval*"
      image_directory_path: "/workspace/tlt-experiments/data/training"
    }
    image_extension: 'png'
    target_class_mapping {
      key: 'car'
      value: 'car'
    }
    target_class_mapping {
      key: 'van'
      value: 'car'
    }
    target_class_mapping {
      key: 'pedestrian'
      value: 'person'
    }
    target_class_mapping {
      key: 'person_sitting'
      value: 'person'
    }
    target_class_mapping {
      key: 'cyclist'
      value: 'cyclist'
    }
    validation_fold: 0
  }
  data_augmentation {
    preprocessing {
      output_image_width: 1248
      output_image_height: 384
      output_image_channel: 3
      min_bbox_width: 1.0
      min_bbox_height: 1.0
    }
    spatial_augmentation {
      hflip_probability: 0.5
      vflip_probability: 0.0
      zoom_min: 1.0
      zoom_max: 1.0
      translate_max_x: 0
      translate_max_y: 0
    }
    color_augmentation {
      hue_rotation_max: 0.0
      saturation_shift_max: 0.0
      contrast_scale_max: 0.0
      contrast_center: 0.5
    }
  }
  enable_augmentation: True
  batch_size_per_gpu: 1 #16
  num_epochs: 12
  pretrained_weights: "/workspace/tlt-experiments/data/faster_rcnn/resnet_18.hdf5"
  #resume_from_model: "/workspace/tlt-experiments/data/faster_rcnn/resnet18.epoch2.tlt"
  output_model: "/workspace/tlt-experiments/data/faster_rcnn/frcnn_kitti_resnet18.tlt"
  rpn_min_overlap: 0.3
  rpn_max_overlap: 0.7
  classifier_min_overlap: 0.0
  classifier_max_overlap: 0.5
  gt_as_roi: False
  std_scaling: 1.0
  classifier_regr_std {
    key: 'x'
    value: 10.0
  }
  classifier_regr_std {
    key: 'y'
    value: 10.0
  }
  classifier_regr_std {
    key: 'w'
    value: 5.0
  }
  classifier_regr_std {
    key: 'h'
    value: 5.0
  }

  rpn_mini_batch: 8 #256
  rpn_pre_nms_top_N: 12000
  rpn_nms_max_boxes: 2000
  rpn_nms_overlap_threshold: 0.7

  reg_config {
    reg_type: 'L2'
    weight_decay: 1e-4
  }

  optimizer {
    adam {
      lr: 0.00001
      beta_1: 0.9
      beta_2: 0.999
      decay: 0.0
    }
  }

  lr_scheduler {
    step {
      base_lr: 0.00016
      gamma: 1.0
      step_size: 30
    }
  }

  lambda_rpn_regr: 1.0
  lambda_rpn_class: 1.0
  lambda_cls_regr: 1.0
  lambda_cls_class: 1.0

  inference_config {
    images_dir: '/workspace/tlt-experiments/data/testing/image_2'
    model: '/workspace/tlt-experiments/data/faster_rcnn/frcnn_kitti_resnet18.epoch12.tlt'
    detection_image_output_dir: '/workspace/tlt-experiments/data/faster_rcnn/inference_results_imgs'
    labels_dump_dir: '/workspace/tlt-experiments/data/faster_rcnn/inference_dump_labels'
    rpn_pre_nms_top_N: 6000
    rpn_nms_max_boxes: 300
    rpn_nms_overlap_threshold: 0.7
    bbox_visualize_threshold: 0.6
    classifier_nms_max_boxes: 300
    classifier_nms_overlap_threshold: 0.3
  }

  evaluation_config {
    model: '/workspace/tlt-experiments/data/faster_rcnn/frcnn_kitti_resnet18.epoch12.tlt'
    labels_dump_dir: '/workspace/tlt-experiments/data/faster_rcnn/test_dump_labels'
    rpn_pre_nms_top_N: 6000
    rpn_nms_max_boxes: 300
    rpn_nms_overlap_threshold: 0.7
    classifier_nms_max_boxes: 300
    classifier_nms_overlap_threshold: 0.3
    object_confidence_thres: 0.0001
    use_voc07_11point_metric: False
  }
}

default_spec_resnet18_retrain_spec.txt

# Copyright (c) 2017-2019, NVIDIA CORPORATION.  All rights reserved.
random_seed: 42
enc_key: <of course I'm not posting my key on the internet>
verbose: True
network_config {
  input_image_config {
    image_type: RGB
    image_channel_order: 'bgr'
    size_height_width {
      height: 384
      width: 1248
    }
    image_channel_mean {
      key: 'b'
      value: 103.939
    }
    image_channel_mean {
      key: 'g'
      value: 116.779
    }
    image_channel_mean {
      key: 'r'
      value: 123.68
    }
    image_scaling_factor: 1.0
    max_objects_num_per_image: 100
  }
  feature_extractor: "resnet:18"
  anchor_box_config {
    scale: 64.0
    scale: 128.0
    scale: 256.0
    ratio: 1.0
    ratio: 0.5
    ratio: 2.0
  }
  freeze_bn: True
  freeze_blocks: 0
  freeze_blocks: 1
  roi_mini_batch: 8 #256
  rpn_stride: 16
  conv_bn_share_bias: True
  roi_pooling_config {
    pool_size: 7
    pool_size_2x: False
  }
  all_projections: True
  use_pooling: False
}
training_config {
  kitti_data_config {
    data_sources: {
      tfrecords_path: "/workspace/tlt-experiments/tfrecords/kitti_trainval/kitti_trainval*"
      image_directory_path: "/workspace/tlt-experiments/data/training"
    }
    image_extension: 'png'
    target_class_mapping {
      key: 'car'
      value: 'car'
    }
    target_class_mapping {
      key: 'van'
      value: 'car'
    }
    target_class_mapping {
      key: 'pedestrian'
      value: 'person'
    }
    target_class_mapping {
      key: 'person_sitting'
      value: 'person'
    }
    target_class_mapping {
      key: 'cyclist'
      value: 'cyclist'
    }
    validation_fold: 0
  }
  data_augmentation {
    preprocessing {
      output_image_width: 1248
      output_image_height: 384
      output_image_channel: 3
      min_bbox_width: 1.0
      min_bbox_height: 1.0
    }
    spatial_augmentation {
      hflip_probability: 0.5
      vflip_probability: 0.0
      zoom_min: 1.0
      zoom_max: 1.0
      translate_max_x: 0
      translate_max_y: 0
    }
    color_augmentation {
      hue_rotation_max: 0.0
      saturation_shift_max: 0.0
      contrast_scale_max: 0.0
      contrast_center: 0.5
    }
  }
  enable_augmentation: True
  batch_size_per_gpu: 1 #16
  num_epochs: 12
  retrain_pruned_model: "/workspace/tlt-experiments/data/faster_rcnn/model_1_pruned.tlt"
  output_model: "/workspace/tlt-experiments/data/faster_rcnn/frcnn_kitti_resnet18_retrain.tlt"
  rpn_min_overlap: 0.3
  rpn_max_overlap: 0.7
  classifier_min_overlap: 0.0
  classifier_max_overlap: 0.5
  gt_as_roi: False
  std_scaling: 1.0
  classifier_regr_std {
    key: 'x'
    value: 10.0
  }
  classifier_regr_std {
    key: 'y'
    value: 10.0
  }
  classifier_regr_std {
    key: 'w'
    value: 5.0
  }
  classifier_regr_std {
    key: 'h'
    value: 5.0
  }

  rpn_mini_batch: 8 #256
  rpn_pre_nms_top_N: 12000
  rpn_nms_max_boxes: 2000
  rpn_nms_overlap_threshold: 0.7

  reg_config {
    reg_type: 'L2'
    weight_decay: 1e-4
  }

  optimizer {
    adam {
      lr: 0.00001
      beta_1: 0.9
      beta_2: 0.999
      decay: 0.0
    }
  }

  lr_scheduler {
    step {
      base_lr: 0.00016
      gamma: 1.0
      step_size: 30
    }
  }

  lambda_rpn_regr: 1.0
  lambda_rpn_class: 1.0
  lambda_cls_regr: 1.0
  lambda_cls_class: 1.0

  inference_config {
    images_dir: '/workspace/tlt-experiments/data/testing/image_2'
    model: '/workspace/tlt-experiments/data/faster_rcnn/frcnn_kitti_resnet18_retrain.epoch12.tlt'
    detection_image_output_dir: '/workspace/tlt-experiments/data/faster_rcnn/inference_results_imgs_retrain'
    labels_dump_dir: '/workspace/tlt-experiments/data/faster_rcnn/inference_dump_labels_retrain'
    rpn_pre_nms_top_N: 6000
    rpn_nms_max_boxes: 300
    rpn_nms_overlap_threshold: 0.7
    bbox_visualize_threshold: 0.6
    classifier_nms_max_boxes: 300
    classifier_nms_overlap_threshold: 0.3
    trt_inference {
      trt_engine: '/workspace/tlt-experiments/data/faster_rcnn/trt.int8.engine'
      #trt_engine: '/workspace/tlt-experiments/data/faster_rcnn/trt.fp16.engine'
      trt_data_type: 'int8'
      #trt_data_type: 'fp16'
    }
  }

  evaluation_config {
    model: '/workspace/tlt-experiments/data/faster_rcnn/frcnn_kitti_resnet18_retrain.epoch12.tlt'
    labels_dump_dir: '/workspace/tlt-experiments/data/faster_rcnn/test_dump_labels_retrain'
    rpn_pre_nms_top_N: 6000
    rpn_nms_max_boxes: 300
    rpn_nms_overlap_threshold: 0.7
    classifier_nms_max_boxes: 300
    classifier_nms_overlap_threshold: 0.3
    object_confidence_thres: 0.0001
    use_voc07_11point_metric: False
  }
}

Moving this topic from the Nano forum to the TLT forum.

@ai12
Firstly, some clarifications and comments:

  1. TLT training should run on your computer, not the Jetson. See Integrating TAO Models into DeepStream — TAO Toolkit 3.22.05 documentation. I think you are already aware of this.
  2. You already exported to FP32 successfully. Please ignore the warning “The version of TensorFlow installed on this system is not guaranteed to work with UFF”. See Error at exporting to TRT engine in TLT - #4 by Morganh
  3. As for “Specified FP16 but not supported on platform”: this is because your GPU does not support FP16. See https://developer.nvidia.com/cuda-gpus#compute and Support Matrix :: NVIDIA Deep Learning TensorRT Documentation
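
For example, one way to confirm the compute capability locally (assuming the CUDA demo suite is installed in its default location) is:

$ /usr/local/cuda/extras/demo_suite/deviceQuery | grep "CUDA Capability"

A GeForce GTX 960M reports compute capability 5.0 (Maxwell), which has no FP16 support, hence the error.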

Hi @Morganh, thanks for your answer.
It is a relief to hear that the FP32 model exported successfully.