Confusion in writing a custom config.txt for DeepStream 6.3 (and hence the custom Python app)

• Hardware Platform: Ubuntu 20.04 with NVIDIA GeForce GTX 1080 Ti
• DeepStream 6.3-triton-multiarch Docker image
• TensorRT 8.5.3
• NVIDIA GPU Driver version 555.42.02
• Issue Type: Aborted (core dumped)

• Detailed Error:

Frames will be saved in  frames
Creating streamux 
 
Creating source_bin  0  
 
Creating source bin
source-bin-00
Creating Pgie 

Unknown or legacy key specified 'input-blob-names' for group [property]
Creating EGLSink 

Playing file file:///opt/nvidia/deepstream/deepstream-6.3/sources/deepstream_python_apps/apps/deepstream-tickerdetection/test.mp4
Adding elements to Pipeline 

Linking elements in the Pipeline 

Linking elements in the Pipeline 

Starting pipeline 

0:00:01.942882558 13033      0x2c3e090 INFO                 nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1988> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-6.3/sources/deepstream_python_apps/apps/deepstream-tickerdetection/det_model.engine
WARNING: [TRT]: The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 2
0   INPUT  kFLOAT input           360x640         
1   OUTPUT kFLOAT output          180x320         

python3: nvdsinfer_backend.cpp:135: virtual bool nvdsinfer::TrtBackendContext::canSupportBatchDims(int, const NvDsInferBatchDims&): Assertion `m_AllLayers[bindingIdx].inferDims.numDims == batchDims.dims.numDims' failed.
Aborted (core dumped)
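For readers hitting the same assertion: the failure is a rank mismatch between the engine's input binding (reported as 2-D, 360x640, in the `[Implicit Engine Info]` lines of the log) and the 3-D shape requested via `infer-dims=360;640;1` in the config. A simplified stand-in for that check (not the actual DeepStream source) looks like:

```python
# Simplified stand-in for the rank check in nvdsinfer's canSupportBatchDims();
# the real implementation lives in nvdsinfer_backend.cpp.
def can_support_batch_dims(engine_dims, requested_dims):
    # The assertion compares the number of dimensions (the rank) first;
    # a mismatch aborts the process with "Aborted (core dumped)".
    return len(engine_dims) == len(requested_dims)

engine_input = (360, 640)   # 2-D binding from the [Implicit Engine Info] log
requested = (360, 640, 1)   # 3-D shape from infer-dims=360;640;1

print(can_support_batch_dims(engine_input, requested))  # False -> aborts
```

So the engine's binding shape and the config's `infer-dims` must agree in rank before the per-dimension sizes are even compared.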

I think I am unable to write a correct config.txt file. My current config.txt is as follows:

[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
tlt-model-key=none
#infer-dims=1;360;640
onnx-file=/opt/nvidia/deepstream/deepstream-6.3/sources/deepstream_python_apps/apps/deepstream-tickerdetection/det_model.onnx
model-engine-file=/opt/nvidia/deepstream/deepstream-6.3/sources/deepstream_python_apps/apps/deepstream-tickerdetection/det_model.engine
#labelfile-path=</path/to/your/label_file.txt>
#input-blob-names=input
output-blob-names=output
infer-dims=360;640;1
input-blob-names=input;info
force-implicit-batch-dim=1
batch-size=1
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=0
interval=0
gie-unique-id=1
#change the input to gray scale
model-color-format=1
network-type=3
#output-blob-names=output
#parse-bbox-func-name=NvDsInferParseCustomSegMask
#custom-lib-path=</path/to/your/customparser.so>

The model I’m using is a custom segmentation model that takes grayscale images: the input shape is [1, 360, 640] and the output is a [1, 180, 320] binary mask. When I run the Python app, it also says:

Unknown or legacy key specified 'input-blob-names' for group [property]

I have read that it’s important to specify the input and output layer names of the tensors used in the model. Any help on this would be much appreciated.

You can refer to our ONNX demo deepstream_yolo to write the config file. Please also refer to our Guide to learn the basic meaning of the parameters.
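For anyone landing here later, here is a hedged sketch of what the `[property]` group might look like for the model described in this thread (1-channel 360x640 input, explicit-batch ONNX engine). It is unverified; values such as `network-type=2` and `model-color-format=2` are assumptions to check against the nvinfer plugin documentation:

```ini
# Unverified sketch based only on the details in this thread.
[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
onnx-file=det_model.onnx
model-engine-file=det_model.engine
# CHW order: 1 channel, 360x640, matching the model input [1,360,640]
infer-dims=1;360;640
batch-size=1
# 0=FP32, 1=INT8, 2=FP16
network-mode=0
# 0=RGB, 1=BGR, 2=GRAY -- grayscale model, so 2
model-color-format=2
# 0=detector, 1=classifier, 2=segmentation, 3=instance segmentation
network-type=2
gie-unique-id=1
output-blob-names=output
# input-blob-names is rejected as an unknown/legacy key in DS 6.3, and
# force-implicit-batch-dim conflicts with an explicit-batch ONNX engine,
# so both are omitted here.
```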

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.