Issues running custom model in deepstream

We are having issues running a custom model on DeepStream using the test2 sample architecture from the Python bindings apps.
This is the output:

moon@moon-desktop:/opt/nvidia/deepstream/deepstream-5.0/sources/deepstream_python_apps/apps/FaceBio-Jetson$ python3 deepstream.py /opt/nvidia/deepstream/deepstream-5.0/samples/streams/sample_720p.h264
Creating Pipeline 

Creating Source 

Creating H264Parser 

Creating Decoder 

Creating EGLSink 

Playing file /opt/nvidia/deepstream/deepstream-5.0/samples/streams/sample_720p.h264 
Adding elements to Pipeline 

Adding elements to Pipeline 


(python3:10345): GStreamer-WARNING **: 22:03:55.576: Name 'nvegl-transform' is not unique in bin 'pipeline0', not adding
Linking elements in the Pipeline 

Starting pipeline 


Using winsys: x11 
Opening in BLOCKING MODE
Opening in BLOCKING MODE 
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream-5.0/lib/libnvds_mot_klt.so
gstnvtracker: Optional NvMOT_RemoveStreams not implemented
gstnvtracker: Batch processing is OFF
gstnvtracker: Past frame output is OFF
0:00:00.974887439 10345     0x14e7b690 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1715> [UID = 1]: Trying to create engine from model files
ERROR: Could not find output layer 'conv2d_bbox'
ERROR: failed to build network since parsing model errors.
ERROR: failed to build network.
0:00:02.681484651 10345     0x14e7b690 ERROR                nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1735> [UID = 1]: build engine file failed
0:00:02.681537777 10345     0x14e7b690 ERROR                nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1821> [UID = 1]: build backend context failed
0:00:02.681634081 10345     0x14e7b690 ERROR                nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1148> [UID = 1]: generate backend failed, check config file settings
0:00:02.681726218 10345     0x14e7b690 WARN                 nvinfer gstnvinfer.cpp:809:gst_nvinfer_start:<primary-inference> error: Failed to create NvDsInferContext instance
0:00:02.681756427 10345     0x14e7b690 WARN                 nvinfer gstnvinfer.cpp:809:gst_nvinfer_start:<primary-inference> error: Config file path: dstest2_pgie_config.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
Error: gst-resource-error-quark: Failed to create NvDsInferContext instance (1): /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(809): gst_nvinfer_start (): /GstPipeline:pipeline0/GstNvInfer:primary-inference:
Config file path: dstest2_pgie_config.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED

Hi,

Could not find output layer 'conv2d_bbox'

This error indicates that TensorRT cannot find an output layer named 'conv2d_bbox' in your model. The sample config lists the output layers of the default model, so with a custom model you first need to update the output blob names to match the layer names your model actually produces.

For example, in config_infer_primary.txt:

[property]
...
output-blob-names=conv2d_bbox;conv2d_cov/Sigmoid
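If you are not sure which names your model exposes, you can list them from the model definition. As a rough sketch for a Caffe-based model (the prototxt content below is illustrative, not taken from your model), the declared `top` blob names can be extracted like this; the last ones are typically the outputs to put in `output-blob-names`:

```python
import re

def list_top_blobs(prototxt_text):
    """Return the 'top' blob names declared in a Caffe prototxt, in order."""
    return re.findall(r'top:\s*"([^"]+)"', prototxt_text)

# Illustrative prototxt fragment; replace with the contents of your model's prototxt.
sample = '''
layer { name: "conv2d_bbox" type: "Convolution" top: "conv2d_bbox" }
layer { name: "conv2d_cov/Sigmoid" type: "Sigmoid" top: "conv2d_cov/Sigmoid" }
'''

print(list_top_blobs(sample))  # prints ['conv2d_bbox', 'conv2d_cov/Sigmoid']
```

For UFF or ONNX models the same idea applies, but the layer names come from the exported graph rather than a prototxt.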

Thanks.