Given an .engine file and an .h5 file, how can they be incorporated into DeepStream?

Once the model is trained or provided as in the mentioned article,
will integrating the model into DeepStream fall within the scope of DeepStream support, or not?

Depends on what the issue is.
For example, if a user customizes a model that needs customized post-processing, the user should implement it themselves, since DeepStream only provides the interface for custom post-processing.

In the given scenario it is an image classification model
that just predicts, with some probability, whether the image shows the disease or not.
That doesn’t imply post-processing, does it?
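Even a plain binary classifier involves a minimal amount of post-processing: mapping the network's raw output scores to a label and a probability. A hedged sketch of that step (the function, labels, and threshold here are illustrative, not part of the DeepStream API):

```python
import math

def postprocess(logits, labels=("nondiseased", "diseased"), threshold=0.5):
    """Map raw classifier logits to a (label, probability) pair via softmax."""
    # Numerically stable softmax over the two class scores.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    best = max(range(len(probs)), key=probs.__getitem__)
    # Only report "diseased" when its probability clears the threshold.
    if labels[best] == "diseased" and probs[best] < threshold:
        best = labels.index("nondiseased")
    return labels[best], probs[best]
```

So "no post-processing" here really means "only a softmax and an argmax", which DeepStream's classifier parsing can typically cover.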
Moreover, it seems that the model is produced by applying the following retraining script to the dataset:

python retrain.py \
  --bottleneck_dir=bottlenecks \
  --how_many_training_steps=300 \
  --model_dir=inception \
  --output_graph=retrained_graph.pb \
  --output_labels=retrained_labels.txt \
  --image_dir=<>

the command above seems agnostic to post-processing

If so, I think it should be fine, so no concern, right?

After digging deeper into the Intel article, it turned out that many pieces of the puzzle are missing.
However, since it lists the dataset sources, it should still be possible to train a model with the Google AI interface.
After uploading the datasets it will become clear which options they support for exporting the model.


However, following Intel's article: attempt #1.

git clone https://github.com/javathunderman/diabetic-retinopathy-screening
cd diabetic-retinopathy-screening/
git clone https://github.com/Nomikxyz/retinopathy-dataset
mkdir images
cd images
mkdir diseased
mkdir nondiseased
cd ..

Then copy ~250 files from the retinopathy-dataset symptoms folder into the diseased folder, and ~250 images from the nosymptoms folder into the nondiseased folder.
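The copy step can be scripted instead of done by hand. A minimal sketch (the source folder names are assumptions about the dataset repo layout):

```python
import random
import shutil
from pathlib import Path

def copy_sample(src_dir, dst_dir, n=250, seed=0):
    """Copy a reproducible random sample of up to n files from src_dir to dst_dir."""
    src, dst = Path(src_dir), Path(dst_dir)
    dst.mkdir(parents=True, exist_ok=True)
    files = sorted(p for p in src.iterdir() if p.is_file())
    random.Random(seed).shuffle(files)
    for p in files[:n]:
        shutil.copy2(p, dst / p.name)
    return min(n, len(files))

# Usage (paths assumed from the repo layout above):
# copy_sample("retinopathy-dataset/symptoms", "images/diseased")
# copy_sample("retinopathy-dataset/nosymptoms", "images/nondiseased")
```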

Running the retrain script as per Intel's tutorial:

python3 retrain.py \
  --bottleneck_dir=bottlenecks \
  --how_many_training_steps=300 \
  --model_dir=inception \
  --output_graph=retrained_graph.pb \
  --output_labels=retrained_labels.txt \
  --image_dir=images/
2020-09-03 21:31:50.740715: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.2
WARNING:tensorflow:Deprecation warnings have been disabled. Set TF_ENABLE_DEPRECATION_WARNINGS=1 to re-enable them.
WARNING:tensorflow:From retrain.py:1063: The name tf.app.run is deprecated. Please use tf.compat.v1.app.run instead.

WARNING:tensorflow:From retrain.py:773: The name tf.gfile.Exists is deprecated. Please use tf.io.gfile.exists instead.

W0903 21:31:57.186100 548329693200 module_wrapper.py:139] From retrain.py:773: The name tf.gfile.Exists is deprecated. Please use tf.io.gfile.exists instead.

WARNING:tensorflow:From retrain.py:774: The name tf.gfile.DeleteRecursively is deprecated. Please use tf.io.gfile.rmtree instead.

W0903 21:31:57.186951 548329693200 module_wrapper.py:139] From retrain.py:774: The name tf.gfile.DeleteRecursively is deprecated. Please use tf.io.gfile.rmtree instead.

WARNING:tensorflow:From retrain.py:775: The name tf.gfile.MakeDirs is deprecated. Please use tf.io.gfile.makedirs instead.

W0903 21:31:57.189463 548329693200 module_wrapper.py:139] From retrain.py:775: The name tf.gfile.MakeDirs is deprecated. Please use tf.io.gfile.makedirs instead.

WARNING:tensorflow:From retrain.py:248: The name tf.Session is deprecated. Please use tf.compat.v1.Session instead.

W0903 21:32:00.193557 548329693200 module_wrapper.py:139] From retrain.py:248: The name tf.Session is deprecated. Please use tf.compat.v1.Session instead.

2020-09-03 21:32:00.575117: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcuda.so.1
2020-09-03 21:32:00.680931: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:952] ARM64 does not support NUMA - returning NUMA node zero
2020-09-03 21:32:00.681140: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1634] Found device 0 with properties: 
name: Xavier major: 7 minor: 2 memoryClockRate(GHz): 1.109
pciBusID: 0000:00:00.0
2020-09-03 21:32:00.681223: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.2
2020-09-03 21:32:00.806538: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcublas.so.10
2020-09-03 21:32:00.927444: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcufft.so.10
2020-09-03 21:32:01.063370: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcurand.so.10
2020-09-03 21:32:01.133752: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusolver.so.10
2020-09-03 21:32:01.194147: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusparse.so.10
2020-09-03 21:32:01.248698: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudnn.so.8
2020-09-03 21:32:01.250449: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:952] ARM64 does not support NUMA - returning NUMA node zero
2020-09-03 21:32:01.252056: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:952] ARM64 does not support NUMA - returning NUMA node zero
2020-09-03 21:32:01.252207: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1762] Adding visible gpu devices: 0
2020-09-03 21:32:01.279084: W tensorflow/core/platform/profile_utils/cpu_utils.cc:98] Failed to find bogomips in /proc/cpuinfo; cannot determine CPU frequency
2020-09-03 21:32:01.279771: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x3bd50110 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2020-09-03 21:32:01.280047: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): Host, Default Version
2020-09-03 21:32:01.370448: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:952] ARM64 does not support NUMA - returning NUMA node zero
2020-09-03 21:32:01.371520: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x3bda7c70 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:
2020-09-03 21:32:01.371653: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): Xavier, Compute Capability 7.2
2020-09-03 21:32:01.372743: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:952] ARM64 does not support NUMA - returning NUMA node zero
2020-09-03 21:32:01.372967: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1634] Found device 0 with properties: 
name: Xavier major: 7 minor: 2 memoryClockRate(GHz): 1.109
pciBusID: 0000:00:00.0
2020-09-03 21:32:01.373225: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.2
2020-09-03 21:32:01.373406: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcublas.so.10
2020-09-03 21:32:01.373497: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcufft.so.10
2020-09-03 21:32:01.373560: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcurand.so.10
2020-09-03 21:32:01.373710: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusolver.so.10
2020-09-03 21:32:01.373840: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusparse.so.10
2020-09-03 21:32:01.374012: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudnn.so.8
2020-09-03 21:32:01.374204: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:952] ARM64 does not support NUMA - returning NUMA node zero
2020-09-03 21:32:01.374432: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:952] ARM64 does not support NUMA - returning NUMA node zero
2020-09-03 21:32:01.374519: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1762] Adding visible gpu devices: 0
2020-09-03 21:32:01.374623: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.2
2020-09-03 21:32:03.030578: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1175] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-09-03 21:32:03.030881: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1181]      0 
2020-09-03 21:32:03.030990: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1194] 0:   N 
2020-09-03 21:32:03.031640: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:952] ARM64 does not support NUMA - returning NUMA node zero
2020-09-03 21:32:03.032137: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:952] ARM64 does not support NUMA - returning NUMA node zero
2020-09-03 21:32:03.032520: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1320] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 261 MB memory) -> physical GPU (device: 0, name: Xavier, pci bus id: 0000:00:00.0, compute capability: 7.2)
WARNING:tensorflow:From retrain.py:252: The name tf.GraphDef is deprecated. Please use tf.compat.v1.GraphDef instead.

W0903 21:32:03.078023 548329693200 module_wrapper.py:139] From retrain.py:252: The name tf.GraphDef is deprecated. Please use tf.compat.v1.GraphDef instead.

2020-09-03 21:32:07.218916: W tensorflow/core/framework/op_def_util.cc:357] Op BatchNormWithGlobalNormalization is deprecated. It will cease to work in GraphDef version 9. Use tf.nn.batch_normalization().
Looking for images in 'diseased'
Looking for images in 'nondiseased'
2020-09-03 21:32:08.310473: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:952] ARM64 does not support NUMA - returning NUMA node zero
2020-09-03 21:32:08.317415: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1634] Found device 0 with properties: 
name: Xavier major: 7 minor: 2 memoryClockRate(GHz): 1.109
pciBusID: 0000:00:00.0
2020-09-03 21:32:08.410976: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.2
2020-09-03 21:32:08.463758: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcublas.so.10
2020-09-03 21:32:08.463979: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcufft.so.10
2020-09-03 21:32:08.476783: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcurand.so.10
2020-09-03 21:32:08.500267: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusolver.so.10
2020-09-03 21:32:08.523746: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusparse.so.10
2020-09-03 21:32:08.547276: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudnn.so.8
2020-09-03 21:32:08.547591: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:952] ARM64 does not support NUMA - returning NUMA node zero
2020-09-03 21:32:08.548069: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:952] ARM64 does not support NUMA - returning NUMA node zero
2020-09-03 21:32:08.548238: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1762] Adding visible gpu devices: 0
2020-09-03 21:32:08.548783: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1175] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-09-03 21:32:08.548841: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1181]      0 
2020-09-03 21:32:08.549917: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1194] 0:   N 
2020-09-03 21:32:08.550348: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:952] ARM64 does not support NUMA - returning NUMA node zero
2020-09-03 21:32:08.550734: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:952] ARM64 does not support NUMA - returning NUMA node zero
2020-09-03 21:32:08.551027: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1320] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 261 MB memory) -> physical GPU (device: 0, name: Xavier, pci bus id: 0000:00:00.0, compute capability: 7.2)
Creating bottleneck at bottlenecks/diseased/13638_left.jpeg.txt
2020-09-03 21:32:55.810282: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudnn.so.8
2020-09-03 21:33:55.315417: E tensorflow/core/platform/posix/subprocess.cc:208] Start cannot fork() child process: Cannot allocate memory
2020-09-03 21:33:56.602337: W tensorflow/stream_executor/cuda/ptxas_utils.cc:83] Couldn't invoke /usr/local/cuda/bin/ptxas --version
2020-09-03 21:33:59.154034: E tensorflow/core/platform/posix/subprocess.cc:208] Start cannot fork() child process: Cannot allocate memory
2020-09-03 21:34:01.168701: W tensorflow/stream_executor/cuda/redzone_allocator.cc:312] Internal: Failed to launch ptxas
Relying on driver to perform ptx compilation. This message will be only logged once.
2020-09-03 21:34:32.605687: E tensorflow/core/platform/posix/subprocess.cc:208] Start cannot fork() child process: Cannot allocate memory
2020-09-03 21:34:36.945777: E tensorflow/core/platform/posix/subprocess.cc:208] Start cannot fork() child process: Cannot allocate memory
2020-09-03 21:34:37.705258: E tensorflow/core/platform/posix/subprocess.cc:208] Start cannot fork() child process: Cannot allocate memory
2020-09-03 21:34:44.310620: E tensorflow/core/platform/posix/subprocess.cc:208] Start cannot fork() child process: Cannot allocate memory
2020-09-03 21:34:46.359441: E tensorflow/core/platform/posix/subprocess.cc:208] Start cannot fork() child process: Cannot allocate memory
2020-09-03 21:34:46.717019: E tensorflow/core/platform/posix/subprocess.cc:208] Start cannot fork() child process: Cannot allocate memory
2020-09-03 21:34:46.723566: E tensorflow/core/platform/posix/subprocess.cc:208] Start cannot fork() child process: Cannot allocate memory
2020-09-03 21:34:46.803360: E tensorflow/core/platform/posix/subprocess.cc:208] Start cannot fork() child process: Cannot allocate memory
2020-09-03 21:34:46.835004: E tensorflow/core/platform/posix/subprocess.cc:208] Start cannot fork() child process: Cannot allocate memory
2020-09-03 21:34:46.873237: E tensorflow/core/platform/posix/subprocess.cc:208] Start cannot fork() child process: Cannot allocate memory
2020-09-03 21:34:46.879784: E tensorflow/core/platform/posix/subprocess.cc:208] Start cannot fork() child process: Cannot allocate memory
2020-09-03 21:34:50.410013: E tensorflow/core/platform/posix/subprocess.cc:208] Start cannot fork() child process: Cannot allocate memory
2020-09-03 21:35:00.258402: E tensorflow/core/platform/posix/subprocess.cc:208] Start cannot fork() child process: Cannot allocate memory
2020-09-03 21:36:59.440165: E tensorflow/core/platform/posix/subprocess.cc:208] Start cannot fork() child process: Cannot allocate memory
2020-09-03 21:37:29.503268: E tensorflow/core/platform/posix/subprocess.cc:208] Start cannot fork() child process: Cannot allocate memory
2020-09-03 21:37:40.754483: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcublas.so.10
Killed


After adding an 8 GB swap file in addition to the existing zram swap, the situation improved and training started.
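To confirm the extra swap is actually visible to the system, one can read /proc/meminfo. A small sketch of the parsing step (values in the comment are illustrative):

```python
def swap_total_kb(meminfo_text):
    """Return SwapTotal in kB parsed from /proc/meminfo-style text, or 0 if absent."""
    for line in meminfo_text.splitlines():
        if line.startswith("SwapTotal:"):
            # Line format: "SwapTotal:     8388604 kB"
            return int(line.split()[1])
    return 0

# On the Jetson itself:
# print(swap_total_kb(open("/proc/meminfo").read()), "kB of swap")
```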

Following the Google AI alternative procedure:
Attempt #1:
training has started from the uploaded dataset

Hi,
We advise you to train on dGPU instead of on Jetson device.

@amycao
Thank you for following up!
I have access to cloud resources (Amazon, GCP, etc.),
so I trained a model with Google AI based on the provided images.
The resulting file is as follows:
https://storage.googleapis.com/gaze-dev/model-555139022817591296_tf-saved-model_2020-09-08T00_12_38.738Z_saved_model.pb
There was also another attempt:

python3 retrain.py \
  --bottleneck_dir=bottlenecks \
  --how_many_training_steps=300 \
  --model_dir=inception \
  --output_graph=retrained_graph.pb \
  --output_labels=retrained_labels.txt \
  --image_dir=images/

using the instructions from


resulted in
https://storage.googleapis.com/gaze-dev/retrained_labels.txt
https://storage.googleapis.com/gaze-dev/retrained_graph.pb

However, the question is how to pass the .pb file into Triton Inference / DeepStream?
Reference thread: Implementing DeepStream/TRT integration by Intel's scenario

Please check SSD sample, sources/objectDetector_SSD, README and code.

@amycao
Thank you for following up!
According to the instructions in objectDetector_SSD/README:

wget https://storage.googleapis.com/gaze-dev/model-555139022817591296_tf-saved-model_2020-09-08T00_12_38.738Z_saved_model.pb
sudo apt-get install python-protobuf
# python /usr/lib/python2.7/dist-packages/uff/bin/convert_to_uff.py \
#     model-555139022817591296_tf-saved-model_2020-09-08T00_12_38.738Z_saved_model.pb -O NMS \
#     -p /usr/src/tensorrt/samples/sampleUffSSD/config.py \
#     -o sample_ssd_relu6.uff
python3 /usr/lib/python3.6/dist-packages/uff/bin/convert_to_uff.py \
    model-555139022817591296_tf-saved-model_2020-09-08T00_12_38.738Z_saved_model.pb -O NMS \
    -p /usr/src/tensorrt/samples/sampleUffSSD/config.py \
    -o sample_ssd_relu6.uff
2020-09-21 05:49:22.542630: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.2
Loading model-555139022817591296_tf-saved-model_2020-09-08T00_12_38.738Z_saved_model.pb
Traceback (most recent call last):
  File "/usr/lib/python3.6/dist-packages/uff/bin/convert_to_uff.py", line 143, in <module>
    main()
  File "/usr/lib/python3.6/dist-packages/uff/bin/convert_to_uff.py", line 139, in main
    debug_mode=args.debug
  File "/usr/lib/python3.6/dist-packages/uff/bin/../../uff/converters/tensorflow/conversion_helpers.py", line 274, in from_tensorflow_frozen_model
    with tf.gfile.GFile(frozen_file, "rb") as frozen_pb:
AttributeError: module 'tensorflow' has no attribute 'gfile'
# TensorFlow installed with: sudo pip3 install --pre --extra-index-url https://developer.download.nvidia.com/compute/redist/jp/v44 'tensorflow<2'
Running with Python 2 results in:
python /usr/lib/python2.7/dist-packages/uff/bin/convert_to_uff.py \
    model-555139022817591296_tf-saved-model_2020-09-08T00_12_38.738Z_saved_model.pb -O NMS \
    -p /usr/src/tensorrt/samples/sampleUffSSD/config.py \
    -o sample_ssd_relu6.uff
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/uff/bin/convert_to_uff.py", line 65, in <module>
    import uff
  File "/usr/lib/python2.7/dist-packages/uff/bin/../../uff/__init__.py", line 49, in <module>
    from uff.converters.tensorflow.conversion_helpers import from_tensorflow  # noqa
  File "/usr/lib/python2.7/dist-packages/uff/bin/../../uff/converters/tensorflow/conversion_helpers.py", line 59, in <module>
    from .converter_functions import *  # noqa
  File "/usr/lib/python2.7/dist-packages/uff/bin/../../uff/converters/tensorflow/converter_functions.py", line 59, in <module>
    from uff.converters.tensorflow.converter import TensorFlowToUFFConverter as tf2uff
  File "/usr/lib/python2.7/dist-packages/uff/bin/../../uff/converters/tensorflow/converter.py", line 60, in <module>
    from tensorflow.compat.v1 import AttrValue
ImportError: No module named tensorflow.compat.v1
nvidia@nvidia-desktop:~/dev$ 

Shall I reinstall TensorFlow as version 1? Or something else?

sudo pip3 install --pre --extra-index-url https://developer.download.nvidia.com/compute/redist/jp/v44 'tensorflow<2'

Then the issue is different:

python3 /usr/lib/python3.6/dist-packages/uff/bin/convert_to_uff.py \
    model-555139022817591296_tf-saved-model_2020-09-08T00_12_38.738Z_saved_model.pb -O NMS \
    -p /usr/src/tensorrt/samples/sampleUffSSD/config.py \
    -o sample_ssd_relu6.uff
2020-09-21 06:04:57.459174: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.2
WARNING:tensorflow:Deprecation warnings have been disabled. Set TF_ENABLE_DEPRECATION_WARNINGS=1 to re-enable them.
Loading model-555139022817591296_tf-saved-model_2020-09-08T00_12_38.738Z_saved_model.pb
WARNING:tensorflow:From /usr/lib/python3.6/dist-packages/uff/bin/../../uff/converters/tensorflow/conversion_helpers.py:274: The name tf.gfile.GFile is deprecated. Please use tf.io.gfile.GFile instead.

Traceback (most recent call last):
  File "/usr/lib/python3.6/dist-packages/uff/bin/convert_to_uff.py", line 143, in <module>
    main()
  File "/usr/lib/python3.6/dist-packages/uff/bin/convert_to_uff.py", line 139, in main
    debug_mode=args.debug
  File "/usr/lib/python3.6/dist-packages/uff/bin/../../uff/converters/tensorflow/conversion_helpers.py", line 275, in from_tensorflow_frozen_model
    graphdef.ParseFromString(frozen_pb.read())
  File "/usr/local/lib/python3.6/dist-packages/google/protobuf/message.py", line 199, in ParseFromString
    return self.MergeFromString(serialized)
  File "/usr/local/lib/python3.6/dist-packages/google/protobuf/internal/python_message.py", line 1145, in MergeFromString
    if self._InternalParse(serialized, 0, length) != length:
  File "/usr/local/lib/python3.6/dist-packages/google/protobuf/internal/python_message.py", line 1212, in InternalParse
    pos = field_decoder(buffer, new_pos, end, self, field_dict)
  File "/usr/local/lib/python3.6/dist-packages/google/protobuf/internal/decoder.py", line 754, in DecodeField
    if value._InternalParse(buffer, pos, new_pos) != new_pos:
  File "/usr/local/lib/python3.6/dist-packages/google/protobuf/internal/python_message.py", line 1212, in InternalParse
    pos = field_decoder(buffer, new_pos, end, self, field_dict)
  File "/usr/local/lib/python3.6/dist-packages/google/protobuf/internal/decoder.py", line 733, in DecodeRepeatedField
    if value.add()._InternalParse(buffer, pos, new_pos) != new_pos:
  File "/usr/local/lib/python3.6/dist-packages/google/protobuf/internal/python_message.py", line 1212, in InternalParse
    pos = field_decoder(buffer, new_pos, end, self, field_dict)
  File "/usr/local/lib/python3.6/dist-packages/google/protobuf/internal/decoder.py", line 888, in DecodeMap
    if submsg._InternalParse(buffer, pos, new_pos) != new_pos:
  File "/usr/local/lib/python3.6/dist-packages/google/protobuf/internal/python_message.py", line 1199, in InternalParse
    buffer, new_pos, wire_type)  # pylint: disable=protected-access
  File "/usr/local/lib/python3.6/dist-packages/google/protobuf/internal/decoder.py", line 989, in _DecodeUnknownField
    (data, pos) = _DecodeUnknownFieldSet(buffer, pos)
  File "/usr/local/lib/python3.6/dist-packages/google/protobuf/internal/decoder.py", line 968, in _DecodeUnknownFieldSet
    (data, pos) = _DecodeUnknownField(buffer, pos, wire_type)
  File "/usr/local/lib/python3.6/dist-packages/google/protobuf/internal/decoder.py", line 993, in _DecodeUnknownField
    raise _DecodeError('Wrong wire type in tag.')
google.protobuf.message.DecodeError: Wrong wire type in tag.
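The `Wrong wire type in tag.` error is consistent with the downloaded .pb being a TensorFlow SavedModel rather than the frozen GraphDef that convert_to_uff expects. A rough way to tell the two apart without TensorFlow is to look at the first protobuf tag byte; this is a sketch of a heuristic, not an official check:

```python
def sniff_pb(path):
    """Heuristically classify a TensorFlow .pb file by its first protobuf tag byte.

    GraphDef's first field (node, field 1, length-delimited) serializes as
    tag byte 0x0A; SavedModel's first field (saved_model_schema_version,
    field 1, varint) serializes as tag byte 0x08. Heuristic only.
    """
    with open(path, "rb") as f:
        first = f.read(1)
    if first == b"\x0a":
        return "frozen GraphDef (probably)"
    if first == b"\x08":
        return "SavedModel (probably)"
    return "unknown"
```

If the file turns out to be a SavedModel, it would need to be frozen to a GraphDef first before the UFF converter can parse it.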

or

python /usr/lib/python2.7/dist-packages/uff/bin/convert_to_uff.py \
    model-555139022817591296_tf-saved-model_2020-09-08T00_12_38.738Z_saved_model.pb -O NMS \
    -p /usr/src/tensorrt/samples/sampleUffSSD/config.py \
    -o sample_ssd_relu6.uff
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/uff/bin/convert_to_uff.py", line 65, in <module>
    import uff
  File "/usr/lib/python2.7/dist-packages/uff/bin/../../uff/__init__.py", line 49, in <module>
    from uff.converters.tensorflow.conversion_helpers import from_tensorflow  # noqa
  File "/usr/lib/python2.7/dist-packages/uff/bin/../../uff/converters/tensorflow/conversion_helpers.py", line 59, in <module>
    from .converter_functions import *  # noqa
  File "/usr/lib/python2.7/dist-packages/uff/bin/../../uff/converters/tensorflow/converter_functions.py", line 59, in <module>
    from uff.converters.tensorflow.converter import TensorFlowToUFFConverter as tf2uff
  File "/usr/lib/python2.7/dist-packages/uff/bin/../../uff/converters/tensorflow/converter.py", line 60, in <module>
    from tensorflow.compat.v1 import AttrValue
ImportError: No module named tensorflow.compat.v1

On the other hand, while the model above fails to convert,
the Intel-scenario retrained Inception model as input does produce a uff file:
python3 /usr/lib/python3.6/dist-packages/uff/bin/convert_to_uff.py \
    retrained_graph.pb -O NMS \
    -p /usr/src/tensorrt/samples/sampleUffSSD/config.py \
    -o sample_ssd_relu6.uff
2020-09-21 06:07:44.955945: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.2
WARNING:tensorflow:Deprecation warnings have been disabled. Set TF_ENABLE_DEPRECATION_WARNINGS=1 to re-enable them.
Loading retrained_graph.pb
WARNING:tensorflow:From /usr/lib/python3.6/dist-packages/uff/bin/../../uff/converters/tensorflow/conversion_helpers.py:274: The name tf.gfile.GFile is deprecated. Please use tf.io.gfile.GFile instead.

NOTE: UFF has been tested with TensorFlow 1.15.0.
WARNING: The version of TensorFlow installed on this system is not guaranteed to work with UFF.
UFF Version 0.6.9
=== Automatically deduced input nodes ===
[name: "Input"
op: "Placeholder"
attr {
  key: "dtype"
  value {
    type: DT_FLOAT
  }
}
attr {
  key: "shape"
  value {
    shape {
      dim {
        size: 1
      }
      dim {
        size: 3
      }
      dim {
        size: 300
      }
      dim {
        size: 300
      }
    }
  }
}
]
=========================================

Using output node NMS
Converting to UFF graph
Warning: No conversion function registered for layer: NMS_TRT yet.
Converting NMS as custom op: NMS_TRT
WARNING:tensorflow:From /usr/lib/python3.6/dist-packages/uff/bin/../../uff/converters/tensorflow/converter.py:226: The name tf.AttrValue is deprecated. Please use tf.compat.v1.AttrValue instead.

DEBUG [/usr/lib/python3.6/dist-packages/uff/bin/../../uff/converters/tensorflow/converter.py:143] Marking ['NMS'] as outputs
No. nodes: 2
UFF Output written to sample_ssd_relu6.uff
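Note the "No. nodes: 2" in the log above: only the deduced Input placeholder and the custom NMS op made it into the UFF file, which may indicate that the SSD config.py graph surgery replaced the retrained classifier's content rather than converting it. A quick sanity check is to count the top-level node entries in the frozen GraphDef before conversion. A minimal sketch that walks the protobuf wire format directly, so no TensorFlow is needed (a heuristic that only handles GraphDef's top-level field layout):

```python
def count_graphdef_nodes(data):
    """Count top-level field-1 (node) entries in serialized GraphDef bytes.

    Minimal protobuf wire-format walk; handles varint and length-delimited
    fields only, which covers GraphDef's top-level fields (all have single-byte tags).
    """
    pos, count = 0, 0
    while pos < len(data):
        tag = data[pos]; pos += 1
        field, wire = tag >> 3, tag & 7
        if wire == 0:                 # varint: skip continuation bytes
            while data[pos] & 0x80:
                pos += 1
            pos += 1
        elif wire == 2:               # length-delimited: read varint length, skip payload
            length, shift = 0, 0
            while True:
                b = data[pos]; pos += 1
                length |= (b & 0x7F) << shift
                shift += 7
                if not b & 0x80:
                    break
            pos += length
            if field == 1:            # GraphDef.node
                count += 1
        else:
            break                     # unsupported wire type; stop walking
    return count

# e.g.: count_graphdef_nodes(open("retrained_graph.pb", "rb").read())
```

A retrained Inception graph should report hundreds of nodes; a count near 2 after conversion suggests the graph content was dropped.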

Trying to run:

 gst-launch-1.0 filesrc location=../../samples/streams/sample_1080p_h264.mp4 ! \
>         decodebin ! m.sink_0 nvstreammux name=m batch-size=1 width=1280 \
>         height=720 ! nvinfer config-file-path= config_infer_primary_ssd.txt ! \
>         nvvideoconvert ! nvdsosd ! nvegltransform ! nveglglessink
Warn: 'threshold' parameter has been deprecated. Use 'pre-cluster-threshold' instead.
Setting pipeline to PAUSED ...

Using winsys: x11 
ERROR: Deserialize engine failed because file path: /opt/nvidia/deepstream/deepstream-5.0/sources/objectDetector_SSD/sample_ssd_relu6.uff_b1_gpu0_fp32.engine open error
0:00:01.556749523 19613   0x55aa8638c0 WARN                 nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<nvinfer0> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1690> [UID = 1]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-5.0/sources/objectDetector_SSD/sample_ssd_relu6.uff_b1_gpu0_fp32.engine failed
0:00:01.557178850 19613   0x55aa8638c0 WARN                 nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<nvinfer0> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1797> [UID = 1]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-5.0/sources/objectDetector_SSD/sample_ssd_relu6.uff_b1_gpu0_fp32.engine failed, try rebuild
0:00:01.557236900 19613   0x55aa8638c0 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<nvinfer0> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1715> [UID = 1]: Trying to create engine from model files
#assertionnmsPlugin.cpp,82
Aborted (core dumped)
 cp retrained_labels.txt /opt/nvidia/deepstream/deepstream-5.0/sources/objectDetector_SSD/ssd_coco_labels.txt
 deepstream-app -c deepstream_app_config_ssd.txt
Warn: 'threshold' parameter has been deprecated. Use 'pre-cluster-threshold' instead.

Using winsys: x11 
ERROR: Deserialize engine failed because file path: /opt/nvidia/deepstream/deepstream-5.0/sources/objectDetector_SSD/sample_ssd_relu6.uff_b1_gpu0_fp32.engine open error
0:00:01.224137066 19805     0x3d17b260 WARN                 nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1690> [UID = 1]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-5.0/sources/objectDetector_SSD/sample_ssd_relu6.uff_b1_gpu0_fp32.engine failed
0:00:01.224347924 19805     0x3d17b260 WARN                 nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1797> [UID = 1]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-5.0/sources/objectDetector_SSD/sample_ssd_relu6.uff_b1_gpu0_fp32.engine failed, try rebuild
0:00:01.224382102 19805     0x3d17b260 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1715> [UID = 1]: Trying to create engine from model files
#assertionnmsPlugin.cpp,82
Aborted (core dumped)

Please make sure the uff model is generated correctly, otherwise running the sample with that model will fail.
Follow the README and use the version specified there, and you will generate the uff model.

Could you explain how to make sure the uff model is generated correctly, please?
Do the warnings and messages below point to an issue with the uff file? Does the UFF converter support only TensorFlow 1.15?

 UFF has been tested with TensorFlow 1.15.0.
WARNING: The version of TensorFlow installed on this system is not guaranteed to work with UFF

also

Warning: No conversion function registered for layer: NMS_TRT yet.

also

WARNING:tensorflow:From /usr/lib/python3.6/dist-packages/uff/bin/../../uff/converters/tensorflow/converter.py:226: The name tf.AttrValue is deprecated. Please use tf.compat.v1.AttrValue instead.

Could you clarify which version of what you are referring to, please?
I just copy-pasted from the README, then adjusted it to point to the custom .pb file.

You had failures when converting the pb file to a uff model, as in the log you pasted in comment 37, so how could the model have been generated successfully?

Could you clarify, please, which failures exactly you see here? (It is a repost of the latter attempt from post 37, which does not show any explicit errors as far as I can see, only warnings.)

python3 /usr/lib/python3.6/dist-packages/uff/bin/convert_to_uff.py \
    retrained_graph.pb -O NMS \
    -p /usr/src/tensorrt/samples/sampleUffSSD/config.py \
    -o sample_ssd_relu6.uff
2020-09-21 06:07:44.955945: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.2
WARNING:tensorflow:Deprecation warnings have been disabled. Set TF_ENABLE_DEPRECATION_WARNINGS=1 to re-enable them.
Loading retrained_graph.pb
WARNING:tensorflow:From /usr/lib/python3.6/dist-packages/uff/bin/../../uff/converters/tensorflow/conversion_helpers.py:274: The name tf.gfile.GFile is deprecated. Please use tf.io.gfile.GFile instead.

NOTE: UFF has been tested with TensorFlow 1.15.0.
WARNING: The version of TensorFlow installed on this system is not guaranteed to work with UFF.
UFF Version 0.6.9
=== Automatically deduced input nodes ===
[name: "Input"
op: "Placeholder"
attr {
  key: "dtype"
  value {
    type: DT_FLOAT
  }
}
attr {
  key: "shape"
  value {
    shape {
      dim {
        size: 1
      }
      dim {
        size: 3
      }
      dim {
        size: 300
      }
      dim {
        size: 300
      }
    }
  }
}
]
=========================================

Using output node NMS
Converting to UFF graph
Warning: No conversion function registered for layer: NMS_TRT yet.
Converting NMS as custom op: NMS_TRT
WARNING:tensorflow:From /usr/lib/python3.6/dist-packages/uff/bin/../../uff/converters/tensorflow/converter.py:226: The name tf.AttrValue is deprecated. Please use tf.compat.v1.AttrValue instead.

DEBUG [/usr/lib/python3.6/dist-packages/uff/bin/../../uff/converters/tensorflow/converter.py:143] Marking ['NMS'] as outputs
No. nodes: 2
UFF Output written to sample_ssd_relu6.uff

If you mean the log you posted above, yes, it does not have errors.
But I do see errors in the log from:
python3 /usr/lib/python3.6/dist-packages/uff/bin/convert_to_uff.py model-555139022817591296_tf-saved-model_2020-09-08T00_12_38.738Z_saved_model.pb -O NMS -p /usr/src/tensorrt/samples/sampleUffSSD/config.py -o sample_ssd_relu6.uff
2020-09-21 06:04:57.459174: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.2
WARNING:tensorflow:Deprecation warnings have been disabled. Set TF_ENABLE_DEPRECATION_WARNINGS=1 to re-enable them.
Loading model-555139022817591296_tf-saved-model_2020-09-08T00_12_38.738Z_saved_model.pb
WARNING:tensorflow:From /usr/lib/python3.6/dist-packages/uff/bin/../../uff/converters/tensorflow/conversion_helpers.py:274: The name tf.gfile.GFile is deprecated. Please use tf.io.gfile.GFile instead.

Traceback (most recent call last):
  File "/usr/lib/python3.6/dist-packages/uff/bin/convert_to_uff.py", line 143, in <module>
    main()
  File "/usr/lib/python3.6/dist-packages/uff/bin/convert_to_uff.py", line 139, in main
    debug_mode=args.debug
  File "/usr/lib/python3.6/dist-packages/uff/bin/../../uff/converters/tensorflow/conversion_helpers.py", line 275, in from_tensorflow_frozen_model
    graphdef.ParseFromString(frozen_pb.read())
  File "/usr/local/lib/python3.6/dist-packages/google/protobuf/message.py", line 199, in ParseFromString
    return self.MergeFromString(serialized)
  File "/usr/local/lib/python3.6/dist-packages/google/protobuf/internal/python_message.py", line 1145, in MergeFromString
    if self._InternalParse(serialized, 0, length) != length:
  File "/usr/local/lib/python3.6/dist-packages/google/protobuf/internal/python_message.py", line 1212, in _InternalParse
    pos = field_decoder(buffer, new_pos, end, self, field_dict)
  File "/usr/local/lib/python3.6/dist-packages/google/protobuf/internal/decoder.py", line 754, in DecodeField
    if value._InternalParse(buffer, pos, new_pos) != new_pos:
  File "/usr/local/lib/python3.6/dist-packages/google/protobuf/internal/python_message.py", line 1212, in _InternalParse
    pos = field_decoder(buffer, new_pos, end, self, field_dict)
  File "/usr/local/lib/python3.6/dist-packages/google/protobuf/internal/decoder.py", line 733, in DecodeRepeatedField
    if value.add()._InternalParse(buffer, pos, new_pos) != new_pos:
  File "/usr/local/lib/python3.6/dist-packages/google/protobuf/internal/python_message.py", line 1212, in _InternalParse
    pos = field_decoder(buffer, new_pos, end, self, field_dict)
  File "/usr/local/lib/python3.6/dist-packages/google/protobuf/internal/decoder.py", line 888, in DecodeMap
    if submsg._InternalParse(buffer, pos, new_pos) != new_pos:
  File "/usr/local/lib/python3.6/dist-packages/google/protobuf/internal/python_message.py", line 1199, in _InternalParse
    buffer, new_pos, wire_type)  # pylint: disable=protected-access
  File "/usr/local/lib/python3.6/dist-packages/google/protobuf/internal/decoder.py", line 989, in _DecodeUnknownField
    (data, pos) = _DecodeUnknownFieldSet(buffer, pos)
  File "/usr/local/lib/python3.6/dist-packages/google/protobuf/internal/decoder.py", line 968, in _DecodeUnknownFieldSet
    (data, pos) = _DecodeUnknownField(buffer, pos, wire_type)
  File "/usr/local/lib/python3.6/dist-packages/google/protobuf/internal/decoder.py", line 993, in _DecodeUnknownField
    raise _DecodeError('Wrong wire type in tag.')
google.protobuf.message.DecodeError: Wrong wire type in tag.
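As an aside, a `Wrong wire type in tag` DecodeError usually means the parser was handed a file that is not a frozen `GraphDef` at all; a `saved_model.pb` exported from Google AI is a `SavedModel` protobuf, which `convert_to_uff.py` cannot read directly and which has to be frozen into a `GraphDef` first. A rough sniff test based only on the protobuf wire format (the first tag byte; this is a heuristic of mine, not an official TensorFlow API):

```python
def sniff_pb_kind(first_bytes: bytes) -> str:
    """Guess whether a .pb blob is a frozen GraphDef or a SavedModel.

    Heuristic: a frozen GraphDef normally starts with its repeated `node`
    field (field 1, length-delimited -> tag byte 0x0A), while a SavedModel
    starts with `saved_model_schema_version` (field 1, varint -> tag 0x08).
    """
    if not first_bytes:
        return "empty"
    tag = first_bytes[0]
    if tag == 0x0A:
        return "graphdef"    # likely a frozen GraphDef: OK for convert_to_uff.py
    if tag == 0x08:
        return "savedmodel"  # likely a SavedModel: freeze it first
    return "unknown"


# Usage sketch:
# with open("retrained_graph.pb", "rb") as f:
#     print(sniff_pb_kind(f.read(1)))
```

This would explain why `retrained_graph.pb` (a frozen graph) converts while the GCP `saved_model.pb` fails at `ParseFromString`.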

or

python /usr/lib/python2.7/dist-packages/uff/bin/convert_to_uff.py model-555139022817591296_tf-saved-model_2020-09-08T00_12_38.738Z_saved_model.pb -O NMS -p /usr/src/tensorrt/samples/sampleUffSSD/config.py -o sample_ssd_relu6.uff
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/uff/bin/convert_to_uff.py", line 65, in <module>
    import uff
  File "/usr/lib/python2.7/dist-packages/uff/bin/../../uff/__init__.py", line 49, in <module>
    from uff.converters.tensorflow.conversion_helpers import from_tensorflow  # noqa
  File "/usr/lib/python2.7/dist-packages/uff/bin/../../uff/converters/tensorflow/conversion_helpers.py", line 59, in <module>
    from .converter_functions import *  # noqa
  File "/usr/lib/python2.7/dist-packages/uff/bin/../../uff/converters/tensorflow/converter_functions.py", line 59, in <module>
    from uff.converters.tensorflow.converter import TensorFlowToUFFConverter as tf2uff
  File "/usr/lib/python2.7/dist-packages/uff/bin/../../uff/converters/tensorflow/converter.py", line 60, in <module>
    from tensorflow.compat.v1 import AttrValue
ImportError: No module named tensorflow.compat.v1

Oh, it seems you are using retrained_graph.pb, so the conversion to a UFF model works.

About the error: did you change
output-blob-names=MarkOutput_0
to
output-blob-names=NMS
in sources/objectDetector_SSD/config_infer_primary_ssd.txt
before running the sample?
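For reference, the edit in sources/objectDetector_SSD/config_infer_primary_ssd.txt would look roughly like this (MarkOutput_0 is what the sample ships with; NMS is the output node name used by the UFF conversion above):

```
[property]
# default in the sample config:
# output-blob-names=MarkOutput_0
# output node of the converted UFF SSD model:
output-blob-names=NMS
```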

Thank you for following up!
I am trying with
model-555139022817591296_tf-saved-model_2020-09-08T00_12_38.738Z_saved_model.pb [the GCP approach],
which is the output from Google AI [GCP output].
I am also trying with retrained_graph.pb, which is the result of following the Intel guide step by step [the Intel approach].
However, it seems that I should try changing output-blob-names=MarkOutput_0.
Should that be done for the Intel scenario? The GCP scenario? Both? Neither of the two?

Moreover, the complication is that there is no comprehensive guide anywhere on how to get from a labeled image dataset [as in the Intel scenario, where we have a ~100-500 GB Kaggle dataset of images labeled "bad" or "good"] to a model running in DeepStream or TensorRT.
All the instructions I found have several gaps that eventually prevent the two models I somehow managed to create from being executed in the TensorRT or DeepStream environment.
It would be useful to have a comprehensive instruction for the full cycle: from getting the images to getting a model created in a way that is supported and then processed by TensorRT or DeepStream. That is what I am trying to do.

Like this?

/opt/nvidia/deepstream/deepstream-5.0/sources/objectDetector_SSD$ cp config_infer_primary_ssd.txt config_infer_primary_ssd.txt_bak
# change the value - updated

then trying to run:

 locate config_infer_primary_ssd.txt
/opt/nvidia/deepstream/deepstream-5.0/samples/configs/tlt_pretrained_models/config_infer_primary_ssd.txt
/opt/nvidia/deepstream/deepstream-5.0/sources/objectDetector_SSD/config_infer_primary_ssd.txt
nvidia@nvidia-desktop:/opt/nvidia/deepstream/deepstream-5.0/sources/apps$ cd /opt/nvidia/deepstream/deepstream-5.0/sources/objectDetector_SSD/cbash: cd: /opt/nvidia/deepstream/deepstream-5.0/sources/objectDetector_SSD/c: No such file or directory
nvidia@nvidia-desktop:/opt/nvidia/deepstream/deepstream-5.0/sources/apps$ cd /opt/nvidia/deepstream/deepstream-5.0/sources/objectDetector_SSD/
nvidia@nvidia-desktop:/opt/nvidia/deepstream/deepstream-5.0/sources/objectDetector_SSD$ gst-launch-1.0 filesrc location=../../samples/streams/sample_1080p_h264.mp4 !  decodebin ! m.sink_0 nvstreammux name=m batch-size=1 width=1280 height=720 ! nvinfer config-file-path=config_infer_primary_ssd.txt ! nvvideoconvert ! nvdsosd ! nvegltransform ! nveglglessink
Warn: 'threshold' parameter has been deprecated. Use 'pre-cluster-threshold' instead.
Setting pipeline to PAUSED ...

Using winsys: x11 
ERROR: Deserialize engine failed because file path: /opt/nvidia/deepstream/deepstream-5.0/sources/objectDetector_SSD/sample_ssd_relu6.uff_b1_gpu0_fp32.engine open error
0:00:01.442057392 15384   0x559ece28c0 WARN                 nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<nvinfer0> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1690> [UID = 1]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-5.0/sources/objectDetector_SSD/sample_ssd_relu6.uff_b1_gpu0_fp32.engine failed
0:00:01.442152950 15384   0x559ece28c0 WARN                 nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<nvinfer0> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1797> [UID = 1]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-5.0/sources/objectDetector_SSD/sample_ssd_relu6.uff_b1_gpu0_fp32.engine failed, try rebuild
0:00:01.442202104 15384   0x559ece28c0 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<nvinfer0> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1715> [UID = 1]: Trying to create engine from model files
#assertionnmsPlugin.cpp,82
Aborted (core dumped)

Like this?

 deepstream-app -c deepstream_app_config_ssd.txt
Warn: 'threshold' parameter has been deprecated. Use 'pre-cluster-threshold' instead.

Using winsys: x11 
ERROR: Deserialize engine failed because file path: /opt/nvidia/deepstream/deepstream-5.0/sources/objectDetector_SSD/sample_ssd_relu6.uff_b1_gpu0_fp32.engine open error
0:00:01.209769263 15765     0x1d30e460 WARN                 nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1690> [UID = 1]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-5.0/sources/objectDetector_SSD/sample_ssd_relu6.uff_b1_gpu0_fp32.engine failed
0:00:01.210014172 15765     0x1d30e460 WARN                 nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1797> [UID = 1]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-5.0/sources/objectDetector_SSD/sample_ssd_relu6.uff_b1_gpu0_fp32.engine failed, try rebuild
0:00:01.210168260 15765     0x1d30e460 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1715> [UID = 1]: Trying to create engine from model files
#assertionnmsPlugin.cpp,82

Here I found I needed to update the model path in the config file:

model-engine-file=sample_ssd_relu6.uff
#_b1_gpu0_fp32.engine
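Note that model-engine-file is expected to point at a serialized TensorRT engine, not at the .uff file: TensorRT's deserializer checks a header magic, so feeding it a UFF yields a "Magic tag does not match" serialization error. In the objectDetector_SSD sample config the UFF model goes under uff-file, and the engine file is only a cache that nvinfer rebuilds from the model when it cannot be deserialized. A sketch of the relevant keys (file names taken from the sample):

```
uff-file=sample_ssd_relu6.uff
# cache only; rebuilt from uff-file when missing or not deserializable:
model-engine-file=sample_ssd_relu6.uff_b1_gpu0_fp32.engine
```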

 gst-launch-1.0 filesrc location=../../samples/streams/sample_1080p_h264.mp4 !  decodebin ! m.sink_0 nvstreammux name=m batch-size=1 width=1280 height=720 ! nvinfer config-file-path=config_infer_primary_ssd.txt ! nvvideoconvert ! nvdsosd ! nvegltransform ! nveglglessink
Warn: 'threshold' parameter has been deprecated. Use 'pre-cluster-threshold' instead.
Setting pipeline to PAUSED ...

Using winsys: x11 
ERROR: [TRT]: coreReadArchive.cpp (31) - Serialization Error in verifyHeader: 0 (Magic tag does not match)
ERROR: [TRT]: INVALID_STATE: std::exception
ERROR: [TRT]: INVALID_CONFIG: Deserialize the cuda engine failed.
ERROR: Deserialize engine failed from file: /opt/nvidia/deepstream/deepstream-5.0/sources/objectDetector_SSD/sample_ssd_relu6.uff
0:00:01.188299112 15989   0x5593df28c0 WARN                 nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<nvinfer0> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1690> [UID = 1]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-5.0/sources/objectDetector_SSD/sample_ssd_relu6.uff failed
0:00:01.188462385 15989   0x5593df28c0 WARN                 nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<nvinfer0> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1797> [UID = 1]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-5.0/sources/objectDetector_SSD/sample_ssd_relu6.uff failed, try rebuild
0:00:01.188596376 15989   0x5593df28c0 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<nvinfer0> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1715> [UID = 1]: Trying to create engine from model files
#assertionnmsPlugin.cpp,82
Aborted (core dumped)

Another try:

 deepstream-app -c deepstream_app_config_ssd.txt
Warn: 'threshold' parameter has been deprecated. Use 'pre-cluster-threshold' instead.

Using winsys: x11 
ERROR: [TRT]: coreReadArchive.cpp (31) - Serialization Error in verifyHeader: 0 (Magic tag does not match)
ERROR: [TRT]: INVALID_STATE: std::exception
ERROR: [TRT]: INVALID_CONFIG: Deserialize the cuda engine failed.
ERROR: Deserialize engine failed from file: /opt/nvidia/deepstream/deepstream-5.0/sources/objectDetector_SSD/sample_ssd_relu6.uff
0:00:01.190878751 16123     0x364b9c60 WARN                 nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1690> [UID = 1]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-5.0/sources/objectDetector_SSD/sample_ssd_relu6.uff failed
0:00:01.190945058 16123     0x364b9c60 WARN                 nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1797> [UID = 1]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-5.0/sources/objectDetector_SSD/sample_ssd_relu6.uff failed, try rebuild
0:00:01.190969028 16123     0x364b9c60 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1715> [UID = 1]: Trying to create engine from model files
#assertionnmsPlugin.cpp,82
Aborted (core dumped)

After another edit:

 gst-launch-1.0 filesrc location=../../samples/streams/sample_1080p_h264.mp4 !  decodebin ! m.sink_0 nvstreammux name=m batch-size=1 width=1280 height=720 ! nvinfer config-file-path=config_infer_primary_ssd.txt ! nvvideoconvert ! nvdsosd ! nvegltransform ! nveglglessink
Warn: 'threshold' parameter has been deprecated. Use 'pre-cluster-threshold' instead.
Setting pipeline to PAUSED ...

Using winsys: x11 
ERROR: [TRT]: coreReadArchive.cpp (31) - Serialization Error in verifyHeader: 0 (Magic tag does not match)
ERROR: [TRT]: INVALID_STATE: std::exception
ERROR: [TRT]: INVALID_CONFIG: Deserialize the cuda engine failed.
ERROR: Deserialize engine failed from file: /opt/nvidia/deepstream/deepstream-5.0/sources/objectDetector_SSD/sample_ssd_relu6.uff
0:00:01.175315185 16239   0x5579f3b2c0 WARN                 nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<nvinfer0> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1690> [UID = 1]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-5.0/sources/objectDetector_SSD/sample_ssd_relu6.uff failed
0:00:01.175372020 16239   0x5579f3b2c0 WARN                 nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<nvinfer0> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1797> [UID = 1]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-5.0/sources/objectDetector_SSD/sample_ssd_relu6.uff failed, try rebuild
0:00:01.175402198 16239   0x5579f3b2c0 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<nvinfer0> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1715> [UID = 1]: Trying to create engine from model files
ERROR: failed to build network since there is no model file matched.
ERROR: failed to build network.
0:00:01.175621698 16239   0x5579f3b2c0 ERROR                nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger:<nvinfer0> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1735> [UID = 1]: build engine file failed
0:00:01.175650851 16239   0x5579f3b2c0 ERROR                nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger:<nvinfer0> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1821> [UID = 1]: build backend context failed
0:00:01.175704646 16239   0x5579f3b2c0 ERROR                nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger:<nvinfer0> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1148> [UID = 1]: generate backend failed, check config file settings
0:00:01.175961492 16239   0x5579f3b2c0 WARN                 nvinfer gstnvinfer.cpp:809:gst_nvinfer_start:<nvinfer0> error: Failed to create NvDsInferContext instance
0:00:01.175986805 16239   0x5579f3b2c0 WARN                 nvinfer gstnvinfer.cpp:809:gst_nvinfer_start:<nvinfer0> error: Config file path: config_infer_primary_ssd.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
ERROR: Pipeline doesn't want to pause.
Got context from element 'eglglessink0': gst.egl.EGLDisplay=context, display=(GstEGLDisplay)NULL;
ERROR: from element /GstPipeline:pipeline0/GstNvInfer:nvinfer0: Failed to create NvDsInferContext instance
Additional debug info:
/dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(809): gst_nvinfer_start (): /GstPipeline:pipeline0/GstNvInfer:nvinfer0:
Config file path: config_infer_primary_ssd.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
Setting pipeline to NULL ...
Freeing pipeline ...

However, I am using a USB-C display, which would typically require me to specify display-id=2 in the NVIDIA sink for GStreamer.
config_infer_primary_ssd.txt (3.6 KB) deepstream_app_config_ssd.txt (2.3 KB) ssd_coco_labels.txt (21 Bytes)
https://storage.googleapis.com/gaze-dev/sample_ssd_relu6.uff

@amycao @mchi
The only app that seems to run on my side with the default parameters, though, is
 /usr/bin/deepstream-infer-tensor-meta-app -t inferserver /opt/nvidia/deepstream/deepstream-5.0/samples/streams/sample_720p.h264

Could you explain, please, how to provide a custom .pb input in such a way that it will load from the converted UFF file?

Hi,

ERROR: [TRT]: coreReadArchive.cpp (31) - Serialization Error in verifyHeader: 0 (Magic tag does not match)

Did you use the same TensorRT version for building the engine and for running with it?
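For background: serialized TensorRT engines are not portable across TensorRT versions (nor across GPU types), and deserialization refuses an engine whose header does not match the runtime, so an engine built under one version must be rebuilt under another. Purely as an illustration (the helper and version strings below are hypothetical, not a TensorRT API):

```python
def engine_compatible(build_version: str, runtime_version: str) -> bool:
    """Illustration only: a serialized engine is only expected to load when
    the TensorRT version that built it matches the runtime exactly
    (major.minor.patch); otherwise rebuild the engine from the model."""
    return build_version == runtime_version
```

So, for example, an engine serialized under one TensorRT release should be regenerated rather than copied when the runtime is a different release.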