Implementing DeepStream/TRT integration per Intel's scenario

This is the sample with the SSD .pb file.
Please check the README and the code in this sample.

Please check the code and the Python output video file.

Thank you for following up.
For GStreamer I often have to add nvoverlaysink display-id=2 to get video output, since I am on a USB-C display. Should I add it to the Python GStreamer section of some file?
Meanwhile, I will look through the README and code for any clue on how to feed in a custom .pb.
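For reference, a minimal sketch of what that sink change could look like. Assumptions: the element and property names (nvoverlaysink, display-id) come from the Jetson/DeepStream GStreamer plugins; the pipeline stages shown are illustrative, not the sample's exact graph. In the Python sample you would instead create the sink element with Gst.ElementFactory.make("nvoverlaysink", ...) and call sink.set_property("display-id", 2).

```python
# Sketch: a gst-launch-style pipeline description routing output to display 2.
# The input file and upstream stages are hypothetical placeholders.
sink_desc = "nvoverlaysink display-id=2 sync=0"
stages = [
    "filesrc location=sample_720p.h264",  # hypothetical input file
    "h264parse",
    "nvv4l2decoder",
    "nvvideoconvert",
    sink_desc,
]
pipeline_desc = " ! ".join(stages)
print(pipeline_desc)
```

In the sample code itself, the equivalent one-line change after creating the sink element would be sink.set_property("display-id", 2).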

Which README?
This one?
It doesn't say anything about using your own .pb.
It only explains how to set up the prerequisites and the Triton server.

/deepstream-ssd-parser$ cat README 

@mchi could you explain exactly how to load a custom .pb file with the Triton server, please?
Preferably without Python, using the C version of the Triton inference app if possible; Python just adds extra complication that, as far as I can tell, is not present in the C version of the Triton inference.
Or with Python as well?
So at least one of the approaches will hopefully work.
UPD: the Python version doesn't show video, but it does write a video output file.

  1. As mentioned in the steps previously

    1. Prepare models

    cd /opt/nvidia/deepstream/deepstream/samples/


  2. dstest_ssd_nopostprocess.txt under deepstream-ssd-parser

infer_config {
  unique_id: 5
  gpu_ids: [0]
  max_batch_size: 4
  backend {
    trt_is {
      model_name: "ssd_inception_v2_coco_2018_01_28"
      version: -1
      model_repo {
        root: "…/…/…/…/samples/trtis_model_repo"
        log_level: 2
        tf_gpu_memory_fraction: 0.6
        tf_disable_soft_placement: 0
      }
    }
  }
}
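To point this config at a custom model, the usual Triton approach is to add a new model directory under the repository root given by model_repo.root and change model_name to match. A sketch of the expected layout, where my_custom_ssd is a hypothetical model name:

```
trtis_model_repo/
└── my_custom_ssd/          # hypothetical name; must match model_name in infer_config
    ├── config.pbtxt        # Triton model configuration for this model
    └── 1/                  # numeric version directory (version: -1 picks the latest)
        └── model.graphdef  # your frozen TensorFlow .pb, renamed to model.graphdef
```

With that in place, changing model_name to "my_custom_ssd" in the infer_config above should make nvinferserver load your model instead of the downloaded one, assuming the model's inputs/outputs match what the parser expects.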

Hi, thank you for your response.
However, ./ seems to download some pre-defined models.
My intention was to load a custom .pb, e.g.

This is a reference sample; you can refer to it to run inference with your own .pb.

Could you guide me through modifying the sample so it can use a custom .pb file, please?
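While waiting for a reply: a minimal sketch of the per-model config.pbtxt that Triton expects alongside a frozen TensorFlow GraphDef. The tensor names shown (image_tensor, detection_boxes, etc.) are the standard TF Object Detection API names and are assumptions; check the actual names in your own graph (e.g. with Netron) before using them.

```
name: "my_custom_ssd"             # hypothetical; must match the directory name
platform: "tensorflow_graphdef"   # frozen .pb saved as 1/model.graphdef
max_batch_size: 4
input [
  {
    name: "image_tensor"          # assumed TF Object Detection API input
    data_type: TYPE_UINT8
    dims: [ 300, 300, 3 ]
  }
]
output [
  { name: "detection_boxes"   data_type: TYPE_FP32  dims: [ 100, 4 ] },
  { name: "detection_scores"  data_type: TYPE_FP32  dims: [ 100 ] },
  { name: "detection_classes" data_type: TYPE_FP32  dims: [ 100 ] },
  { name: "num_detections"    data_type: TYPE_FP32  dims: [ 1 ] }
]
```

If the output tensor names or shapes differ from the sample's SSD model, the Python post-processing parser in deepstream-ssd-parser would also need to be adjusted to match.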
