Given an .engine file and an .h5 file, how do I incorporate them into DeepStream?

Thank you for following up!
I am trying with
model-555139022817591296_tf-saved-model_2020-09-08T00_12_38.738Z_saved_model.pb [GCP approach],
which is the output from Google AI [GCP output].
I am also trying with retrained_graph.pb, which is the result of following Intel's guide step by step [Intel's approach].
However, it seems that I should try changing output-blob-names=MarkOutput_0.
Should that be done for the Intel scenario? The GCP scenario? Both? Neither?
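For context, the relevant lines in the stock config_infer_primary_ssd.txt look roughly like this (as shipped with the objectDetector_SSD sample; presumably a graph retrained via GCP or Intel's guide would have its own output node name):

uff-file=sample_ssd_relu6.uff
model-engine-file=sample_ssd_relu6.uff_b1_gpu0_fp32.engine
uff-input-blob-name=Input
# MarkOutput_0 is the output node the UFF converter assigns to the sample SSD graph
output-blob-names=MarkOutput_0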

Moreover, the complication is that there is no comprehensive guide anywhere on how to get from a labeled image dataset (in the Intel scenario it is a ~100-500 GB Kaggle dataset with images labeled "bad" or "good") to processing a model based on those images in DeepStream or TensorRT.
All the instructions I found have gaps that ultimately prevent the two models I somehow managed to create from being executed in a TRT or DS environment.
It would be useful to have a comprehensive instruction covering the full cycle - from collecting images, to creating a model in a supported format, to having it processed by TRT or DS. That is what I am trying to do.

Like this?

/opt/nvidia/deepstream/deepstream-5.0/sources/objectDetector_SSD$ cp config_infer_primary_ssd.txt config_infer_primary_ssd.txt_bak
# change the value - updated

then trying to run:

 locate config_infer_primary_ssd.txt
/opt/nvidia/deepstream/deepstream-5.0/samples/configs/tlt_pretrained_models/config_infer_primary_ssd.txt
/opt/nvidia/deepstream/deepstream-5.0/sources/objectDetector_SSD/config_infer_primary_ssd.txt
nvidia@nvidia-desktop:/opt/nvidia/deepstream/deepstream-5.0/sources/apps$ cd /opt/nvidia/deepstream/deepstream-5.0/sources/objectDetector_SSD/
nvidia@nvidia-desktop:/opt/nvidia/deepstream/deepstream-5.0/sources/objectDetector_SSD$ gst-launch-1.0 filesrc location=../../samples/streams/sample_1080p_h264.mp4 !  decodebin ! m.sink_0 nvstreammux name=m batch-size=1 width=1280 height=720 ! nvinfer config-file-path=config_infer_primary_ssd.txt ! nvvideoconvert ! nvdsosd ! nvegltransform ! nveglglessink
Warn: 'threshold' parameter has been deprecated. Use 'pre-cluster-threshold' instead.
Setting pipeline to PAUSED ...

Using winsys: x11 
ERROR: Deserialize engine failed because file path: /opt/nvidia/deepstream/deepstream-5.0/sources/objectDetector_SSD/sample_ssd_relu6.uff_b1_gpu0_fp32.engine open error
0:00:01.442057392 15384   0x559ece28c0 WARN                 nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<nvinfer0> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1690> [UID = 1]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-5.0/sources/objectDetector_SSD/sample_ssd_relu6.uff_b1_gpu0_fp32.engine failed
0:00:01.442152950 15384   0x559ece28c0 WARN                 nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<nvinfer0> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1797> [UID = 1]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-5.0/sources/objectDetector_SSD/sample_ssd_relu6.uff_b1_gpu0_fp32.engine failed, try rebuild
0:00:01.442202104 15384   0x559ece28c0 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<nvinfer0> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1715> [UID = 1]: Trying to create engine from model files
#assertionnmsPlugin.cpp,82
Aborted (core dumped)

Like this?

 deepstream-app -c deepstream_app_config_ssd.txt
Warn: 'threshold' parameter has been deprecated. Use 'pre-cluster-threshold' instead.

Using winsys: x11 
ERROR: Deserialize engine failed because file path: /opt/nvidia/deepstream/deepstream-5.0/sources/objectDetector_SSD/sample_ssd_relu6.uff_b1_gpu0_fp32.engine open error
0:00:01.209769263 15765     0x1d30e460 WARN                 nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1690> [UID = 1]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-5.0/sources/objectDetector_SSD/sample_ssd_relu6.uff_b1_gpu0_fp32.engine failed
0:00:01.210014172 15765     0x1d30e460 WARN                 nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1797> [UID = 1]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-5.0/sources/objectDetector_SSD/sample_ssd_relu6.uff_b1_gpu0_fp32.engine failed, try rebuild
0:00:01.210168260 15765     0x1d30e460 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1715> [UID = 1]: Trying to create engine from model files
#assertionnmsPlugin.cpp,82

Here I found that I had to update the model path in the config file:

model-engine-file=sample_ssd_relu6.uff
#_b1_gpu0_fp32.engine
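(Note: as I understand it, model-engine-file is supposed to point at a serialized TensorRT .engine, while the .uff source model belongs in uff-file - roughly:

# source model; nvinfer builds an engine from it when no engine file is found
uff-file=sample_ssd_relu6.uff
# serialized engine, written on first build and deserialized on later runs
model-engine-file=sample_ssd_relu6.uff_b1_gpu0_fp32.engine

which would explain the "Magic tag does not match" error below.)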

 gst-launch-1.0 filesrc location=../../samples/streams/sample_1080p_h264.mp4 !  decodebin ! m.sink_0 nvstreammux name=m batch-size=1 width=1280 height=720 ! nvinfer config-file-path=config_infer_primary_ssd.txt ! nvvideoconvert ! nvdsosd ! nvegltransform ! nveglglessink
Warn: 'threshold' parameter has been deprecated. Use 'pre-cluster-threshold' instead.
Setting pipeline to PAUSED ...

Using winsys: x11 
ERROR: [TRT]: coreReadArchive.cpp (31) - Serialization Error in verifyHeader: 0 (Magic tag does not match)
ERROR: [TRT]: INVALID_STATE: std::exception
ERROR: [TRT]: INVALID_CONFIG: Deserialize the cuda engine failed.
ERROR: Deserialize engine failed from file: /opt/nvidia/deepstream/deepstream-5.0/sources/objectDetector_SSD/sample_ssd_relu6.uff
0:00:01.188299112 15989   0x5593df28c0 WARN                 nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<nvinfer0> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1690> [UID = 1]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-5.0/sources/objectDetector_SSD/sample_ssd_relu6.uff failed
0:00:01.188462385 15989   0x5593df28c0 WARN                 nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<nvinfer0> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1797> [UID = 1]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-5.0/sources/objectDetector_SSD/sample_ssd_relu6.uff failed, try rebuild
0:00:01.188596376 15989   0x5593df28c0 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<nvinfer0> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1715> [UID = 1]: Trying to create engine from model files
#assertionnmsPlugin.cpp,82
Aborted (core dumped)

another try

 deepstream-app -c deepstream_app_config_ssd.txt
Warn: 'threshold' parameter has been deprecated. Use 'pre-cluster-threshold' instead.

Using winsys: x11 
ERROR: [TRT]: coreReadArchive.cpp (31) - Serialization Error in verifyHeader: 0 (Magic tag does not match)
ERROR: [TRT]: INVALID_STATE: std::exception
ERROR: [TRT]: INVALID_CONFIG: Deserialize the cuda engine failed.
ERROR: Deserialize engine failed from file: /opt/nvidia/deepstream/deepstream-5.0/sources/objectDetector_SSD/sample_ssd_relu6.uff
0:00:01.190878751 16123     0x364b9c60 WARN                 nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1690> [UID = 1]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-5.0/sources/objectDetector_SSD/sample_ssd_relu6.uff failed
0:00:01.190945058 16123     0x364b9c60 WARN                 nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1797> [UID = 1]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-5.0/sources/objectDetector_SSD/sample_ssd_relu6.uff failed, try rebuild
0:00:01.190969028 16123     0x364b9c60 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1715> [UID = 1]: Trying to create engine from model files
#assertionnmsPlugin.cpp,82
Aborted (core dumped)

after another edit:

 gst-launch-1.0 filesrc location=../../samples/streams/sample_1080p_h264.mp4 !  decodebin ! m.sink_0 nvstreammux name=m batch-size=1 width=1280 height=720 ! nvinfer config-file-path=config_infer_primary_ssd.txt ! nvvideoconvert ! nvdsosd ! nvegltransform ! nveglglessink
Warn: 'threshold' parameter has been deprecated. Use 'pre-cluster-threshold' instead.
Setting pipeline to PAUSED ...

Using winsys: x11 
ERROR: [TRT]: coreReadArchive.cpp (31) - Serialization Error in verifyHeader: 0 (Magic tag does not match)
ERROR: [TRT]: INVALID_STATE: std::exception
ERROR: [TRT]: INVALID_CONFIG: Deserialize the cuda engine failed.
ERROR: Deserialize engine failed from file: /opt/nvidia/deepstream/deepstream-5.0/sources/objectDetector_SSD/sample_ssd_relu6.uff
0:00:01.175315185 16239   0x5579f3b2c0 WARN                 nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<nvinfer0> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1690> [UID = 1]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-5.0/sources/objectDetector_SSD/sample_ssd_relu6.uff failed
0:00:01.175372020 16239   0x5579f3b2c0 WARN                 nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<nvinfer0> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1797> [UID = 1]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-5.0/sources/objectDetector_SSD/sample_ssd_relu6.uff failed, try rebuild
0:00:01.175402198 16239   0x5579f3b2c0 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<nvinfer0> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1715> [UID = 1]: Trying to create engine from model files
ERROR: failed to build network since there is no model file matched.
ERROR: failed to build network.
0:00:01.175621698 16239   0x5579f3b2c0 ERROR                nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger:<nvinfer0> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1735> [UID = 1]: build engine file failed
0:00:01.175650851 16239   0x5579f3b2c0 ERROR                nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger:<nvinfer0> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1821> [UID = 1]: build backend context failed
0:00:01.175704646 16239   0x5579f3b2c0 ERROR                nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger:<nvinfer0> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1148> [UID = 1]: generate backend failed, check config file settings
0:00:01.175961492 16239   0x5579f3b2c0 WARN                 nvinfer gstnvinfer.cpp:809:gst_nvinfer_start:<nvinfer0> error: Failed to create NvDsInferContext instance
0:00:01.175986805 16239   0x5579f3b2c0 WARN                 nvinfer gstnvinfer.cpp:809:gst_nvinfer_start:<nvinfer0> error: Config file path: config_infer_primary_ssd.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
ERROR: Pipeline doesn't want to pause.
Got context from element 'eglglessink0': gst.egl.EGLDisplay=context, display=(GstEGLDisplay)NULL;
ERROR: from element /GstPipeline:pipeline0/GstNvInfer:nvinfer0: Failed to create NvDsInferContext instance
Additional debug info:
/dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(809): gst_nvinfer_start (): /GstPipeline:pipeline0/GstNvInfer:nvinfer0:
Config file path: config_infer_primary_ssd.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
Setting pipeline to NULL ...
Freeing pipeline ...

However, I am using a USB-C display, which typically requires me to specify display-id=2 on the NVIDIA sink in GStreamer.
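A sketch of the sink I would normally need for that (assuming nvoverlaysink, which on Jetson exposes a display-id property; nveglglessink does not take one):

gst-launch-1.0 filesrc location=../../samples/streams/sample_1080p_h264.mp4 ! decodebin ! m.sink_0 nvstreammux name=m batch-size=1 width=1280 height=720 ! nvinfer config-file-path=config_infer_primary_ssd.txt ! nvvideoconvert ! nvdsosd ! nvoverlaysink display-id=2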
config_infer_primary_ssd.txt (3.6 KB) deepstream_app_config_ssd.txt (2.3 KB) ssd_coco_labels.txt (21 Bytes)
https://storage.googleapis.com/gaze-dev/sample_ssd_relu6.uff

@amycao @mchi
The only app that seems to run on my side with default parameters, though, is

 /usr/bin/deepstream-infer-tensor-meta-app -t inferserver /opt/nvidia/deepstream/deepstream-5.0/samples/streams/sample_720p.h264

Could you elaborate on how to provide a custom .pb input so that it loads from the converted .uff file, please?
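For reference, the objectDetector_SSD README converts its own frozen graph roughly like this - I presume a custom .pb needs its own output node name instead of NMS and its own preprocessing config.py, and the uff package path varies by install:

python3 /usr/lib/python3.6/dist-packages/uff/bin/convert_to_uff.py frozen_inference_graph.pb -O NMS -p config.py -o sample_ssd_relu6.uff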

Hi,

ERROR: [TRT]: coreReadArchive.cpp (31) - Serialization Error in verifyHeader: 0 (Magic tag does not match)

Did you use same TensorRT version for building engine and running with the engine?
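You can check the version on each device, for example:

dpkg -l | grep -i tensorrt
python3 -c "import tensorrt; print(tensorrt.__version__)"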

You mean whether converting the .pb to .uff and then executing the .uff file with DeepStream happen on the same Jetson? Yes, they do. Although I tried various environments and models, including but not limited to docker/non-docker, and two models - one from Google AI and one from retraining. Maybe the retraining was done on the NX while the conversion from .pb to .uff was on the AGX, but probably various combinations were used. Moreover, since the effort is open source, you can try to repeat the steps to the same state on your side.
Are the errors reproducible in your environment?
Is there any comprehensive step-by-step guide on how to get from a labeled dataset to executing a model built from that dataset with TRT/DeepStream?

from labeled dataset to executing a model ==> this includes:

  1. Training based on the labeled dataset ===> this is 'training', done with TensorFlow, PyTorch, etc.; it is out of the scope of the DeepStream smart-video inference SDK.
  2. Deploying the model trained in step #1 in DeepStream, for which there are basically two solutions:
    2.1 Convert the model to ONNX, UFF, or Caffe prototxt, which are supported by TensorRT, so that it can be deployed in DeepStream. TensorRT supports ONNX, UFF, and Caffe prototxt, but UFF and prototxt will be deprecated in the future.
    2.2 Just use DeepStream nvinferserver to run inference on the model, which should give lower performance than using TensorRT as in 2.1 above.
    ===> For this part, DeepStream does not cover how to convert the TensorFlow or PyTorch model to ONNX, UFF, or prototxt. We recommend converting to ONNX; ONNX is public, and you can find related introductions and conversion articles on the internet (see the sketch below). But DeepStream does cover how to deploy the ONNX, UFF, or prototxt model in DeepStream.
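As a sketch of the conversion in 2.1, a TensorFlow SavedModel can usually be converted with the tf2onnx package (here ./saved_model stands for your GCP export directory, and the opset may need to match your TensorRT version):

pip3 install tf2onnx
python3 -m tf2onnx.convert --saved-model ./saved_model --output model.onnx --opset 11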

The question is not a generic 'how to train a model', but how to generate a model in such a particular way that it is compatible with the NVIDIA ecosystem, including TRT/DeepStream.
E.g., as of now the simplest way to get the .pb is to generate a .pb TensorFlow output file via Google AI, after uploading the dataset there and then training a model on it.
But the resulting file doesn't work with NVIDIA solutions.

We got two models from different providers. They are both presented here in the thread and can be downloaded. We also converted the .pb into ONNX.
As indicated in this forum thread, neither of the two models worked to any extent with DeepStream. Could you show, on these models, exactly how to get at least one of them running in DeepStream?
Could you run it on your side, in your environment?

Have you got it converted to onnx, uff or prototxt?
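Once converted, the nvinfer config references the .onnx directly - a sketch with hypothetical file names; batch size and precision still have to match your setup:

onnx-file=model.onnx
model-engine-file=model.onnx_b1_gpu0_fp32.engine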

Could you run it on your side, in your environment?

Sorry! Converting a model to be runnable on TensorRT or Triton needs to be done by the user. It's out of the DeepStream forum's support scope.

Thanks!

Yes, we did get:

  1. the direct .pb output from Google AI
  2. ONNX converted from the .pb
  3. I believe we also tried .uff files as input

Given there is no complete instruction starting from labeling an image and finishing with running a model in DeepStream, the information provided in this regard seems fragmentary, with 'out of the scope of the support provided' statements here and there.
From that point of view, it would in my opinion be helpful if there were a basic example covering the entire process - starting from putting images into a labeled dataset, and ending with loading that specific output model into DeepStream.
However, I will try again, though with probability level 'highly likely' it won't get further than various errors.

Or, you can try TLT - https://developer.nvidia.com/transfer-learning-toolkit. NVIDIA provides TLT to retrain TLT models with the user's data (labeled images) and to generate models that can be deployed in DeepStream easily - https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_TLT_integration.html .
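The rough flow with the TLT 2.0 container is sketched below; the subcommand syntax is from memory, and the spec file, model names, and $KEY (your NGC API key) are placeholders, so please check the TLT docs for the network you pick:

docker pull nvcr.io/nvidia/tlt-streamanalytics:v2.0_py3
# inside the container: train on the labeled dataset, then export for DeepStream
tlt-train detectnet_v2 -e train_spec.txt -r results -k $KEY
tlt-export detectnet_v2 -m results/weights/model.tlt -o model.etlt -k $KEY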

Thank you for the suggestion.
We tried for a while to apply it with the given inputs as dataset/model/.pb file.

@mchi
If we got a TRT engine from YOLOv4,
is there any way to process an image with DeepStream using multiple engine files?
ref https://github.com/jkjung-avt/tensorrt_demos

What do you mean by "multiple engine files"? Is it like --> gie --> gie --> ... ?

We have a DeepStream YOLOv4 sample - https://github.com/NVIDIA-AI-IOT/yolov4_deepstream

@mchi
Thank you for your response.
How do we use multiple converted TRT models created from YOLOv4 to process the stream? Using the referenced sample? Something else? Thanks.
E.g., first detect a rectangle with a person in the image based on converted YOLOv4-to-TRT model A,
then within the coordinates of box A detect some different object B using YOLO model B.
Does it work in serial or in parallel?

gie >
gie >    (two GIEs in parallel)

versus gie -> gie (in series)?
Both?
Neither?

Yes, that's supported. It's similar to the deepstream-test2 sample, but both models, for the pgie and the sgie, are replaced with YoloV4 models trained for different objects.
The models in the two GIEs (pgie and sgie) run in parallel.
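In deepstream-app config terms the cascade looks roughly like this (a sketch; the two config file names are placeholders, each pointing at one of your YoloV4 models, and operate-on-gie-id makes the sgie run on the pgie's detections):

[primary-gie]
enable=1
gie-unique-id=1
config-file=config_infer_yolov4_person.txt

[secondary-gie0]
enable=1
gie-unique-id=2
operate-on-gie-id=1
config-file=config_infer_yolov4_objectB.txt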