Warning while running objectDetector_SSD app

I am trying to run the SSD app on a Jetson Nano. I’ve followed all the instructions in the README, but I get this error every time I try to run the app:

Using winsys: x11 
Creating LL OSD context new
0:00:08.102296521 17497   0x7f1c001f80 WARN                 nvinfer gstnvinfer.cpp:515:gst_nvinfer_logger:<primary_gie_classifier> NvDsInferContext[UID 1]:checkEngineParams(): Could not find output layer 'MarkOutput_0' in engine

Hi,

We don’t see this warning in our environment.
The warning indicates that the output layer name in your model is not what the app expects.

Could you double-check that you generated the uff model with steps like these:

$ wget http://download.tensorflow.org/models/object_detection/ssd_inception_v2_coco_2017_11_17.tar.gz
$ tar xzvf ssd_inception_v2_coco_2017_11_17.tar.gz
$ cd ssd_inception_v2_coco_2017_11_17
$ python /usr/lib/python2.7/dist-packages/uff/bin/convert_to_uff.py \
    frozen_inference_graph.pb -O NMS \
    -p /usr/src/tensorrt/samples/sampleUffSSD/config.py \
    -o sample_ssd_relu6.uff

If yes, could you share your model with us?
Thanks.

Hey! I get the following output when I generate the uff model with the given steps:

Loading frozen_inference_graph.pb
NOTE: UFF has been tested with TensorFlow 1.12.0. Other versions are not guaranteed to work
WARNING: The version of TensorFlow installed on this system is not guaranteed to work with UFF.
WARNING: To create TensorRT plugin nodes, please use the `create_plugin_node` function instead.
WARNING: To create TensorRT plugin nodes, please use the `create_plugin_node` function instead.
UFF Version 0.6.3
=== Automatically deduced input nodes ===
[name: "Input"
op: "Placeholder"
attr {
  key: "dtype"
  value {
    type: DT_FLOAT
  }
}
attr {
  key: "shape"
  value {
    shape {
      dim {
        size: 1
      }
      dim {
        size: 3
      }
      dim {
        size: 300
      }
      dim {
        size: 300
      }
    }
  }
}
]
=========================================

Using output node NMS
Converting to UFF graph
Warning: No conversion function registered for layer: NMS_TRT yet.
Converting NMS as custom op: NMS_TRT
Warning: No conversion function registered for layer: FlattenConcat_TRT yet.
Converting concat_box_conf as custom op: FlattenConcat_TRT
Warning: No conversion function registered for layer: GridAnchor_TRT yet.
Converting GridAnchor as custom op: GridAnchor_TRT
Warning: No conversion function registered for layer: FlattenConcat_TRT yet.
Converting concat_box_loc as custom op: FlattenConcat_TRT
No. nodes: 563
UFF Output written to sample_ssd_relu6.uff

Hey! I decided to switch from SSD_inception_v2 to mobilenet_SSD_v2 and generated the uff file again, but I encountered the same warning. I’m sharing the uff file with you here: https://drive.google.com/file/d/14HEHf4dgdW_fAacjEdVvJLc9DlZ0vqSb/view?usp=sharing

Can you add “-t” when you run convert_to_uff.py to generate a pbtxt file? You can read the output layer names from the pbtxt and then set/replace “output-blob-names=MarkOutput_0” in config_infer_primary_ssd.txt accordingly.
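To illustrate the suggestion above: after rerunning convert_to_uff.py with “-t”, a small script can list the node names found in the generated text-proto dump so you can spot the real output layer. This is only a minimal sketch — the exact field key in the pbtxt (name: vs. id:) can vary between UFF versions, so the regex below matches both, and the sample text is a hypothetical excerpt, not real converter output:

```python
import re

def quoted_fields(pbtxt_text, keys=("name", "id")):
    """Return the values of quoted string fields such as `name: "NMS"` or
    `id: "NMS"` from a text-proto dump. The field keys are hedged because
    the exact pbtxt schema may differ between UFF versions."""
    pattern = r'\b(?:%s):\s*"([^"]+)"' % "|".join(keys)
    return re.findall(pattern, pbtxt_text)

# Hypothetical pbtxt excerpt for illustration only:
sample = '''
nodes {
  id: "NMS"
  inputs: "concat_box_conf"
  inputs: "concat_box_loc"
}
'''
print(quoted_fields(sample))  # → ['NMS']
```

Once you know the actual output node name, put it in config_infer_primary_ssd.txt in place of the current value, e.g. output-blob-names=NMS (assuming NMS is what your pbtxt shows).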