An error occurred while converting .etlt to .engine

I used TLT 3.0 to train an SSD model and export it as a .etlt file. When I use tlt-converter to convert it to a .engine file on the 2GB version of the Jetson Nano, the following error occurs:


Any ideas? Thanks.

It is an out-of-memory issue.
See the usage below and change "-m" (maximum batch size) or "-w" (maximum workspace size); for example, -w 1000000000. A sample conversion command is shown after the usage listing.

$ ./tlt-converter -h
usage: ./tlt-converter [-h] [-v] [-e ENGINE_FILE_PATH]
[-k ENCODE_KEY] [-c CACHE_FILE]
[-o OUTPUTS] [-d INPUT_DIMENSIONS]
[-b BATCH_SIZE] [-m MAX_BATCH_SIZE]
[-w MAX_WORKSPACE_SIZE] [-t DATA_TYPE]
[-i INPUT_ORDER] [-s] [-u DLA_CORE]
input_file

Generate TensorRT engine from exported model

positional arguments:
input_file Input file (.etlt exported model).

required flag arguments:
-d comma separated list of input dimensions
-k model encoding key

optional flag arguments:
-b calibration batch size (default 8)
-c calibration cache file (default cal.bin)
-e file the engine is saved to (default saved.engine)
-i input dimension ordering – nchw, nhwc, nc (default nchw)
-m maximum TensorRT engine batch size (default 16). If you run into an out-of-memory issue, decrease the batch size accordingly
-o comma separated list of output node names (default none)
-p comma separated list of optimization profile shapes in the format <min_shape>,<opt_shape>,<max_shape>, where each shape has the format: xxx. This argument is only useful in dynamic shape case.
-s TensorRT strict_type_constraints flag for INT8 mode (default false)
-t TensorRT data type – fp32, fp16, int8 (default fp32)
-u Use DLA core N for layers that support DLA (default = -1, which means no DLA core will be utilized for inference; note that it will always allow GPU fallback)
-w maximum workspace size of the TensorRT engine (default 1<<30). If you run into an out-of-memory issue, increase the workspace size accordingly
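For reference, here is a sketch of a full conversion command for an SSD model on the 2GB Nano, combining a lower "-m" and "-w" with fp16 to reduce memory pressure. The encoding key, input dimensions (3,300,300), and engine file name are placeholders; replace them with the values from your own training and export. "-o NMS" is the output node the TLT docs list for SSD, but verify it against your exported model.

$ ./tlt-converter -k <YOUR_KEY> \
                  -d 3,300,300 \
                  -o NMS \
                  -t fp16 \
                  -m 1 \
                  -w 1000000000 \
                  -e ssd.engine \
                  ssd.etlt

If the build still fails, you can watch memory usage in a second terminal with the standard Jetson utility tegrastats (sudo tegrastats); on a 2GB Nano it can also help to run the conversion from a console session with the desktop stopped, since the GUI itself uses a noticeable share of RAM.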