Custom model deployment on DeepStream

I have converted a PyTorch model to a TensorRT engine. It is basically a segmentation model. Now when I try to run it on DeepStream I get an error like this:

Opening in BLOCKING MODE
WARNING: [TRT]: Using an engine plan file across different models of devices is not recommended and is likely to affect performance or even cause errors.
0:00:08.666316719 26834 0x17631b50 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1900> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-6.0/sources/deepstream_python_apps/apps/deepstream-segmentation/trt_fp32_py.engine
INFO: [Implicit Engine Info]: layers num: 2
0 INPUT kFLOAT actual_input 3x64x64
1 OUTPUT kFLOAT output 6x64x64

0:00:08.668154258 26834 0x17631b50 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2004> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-6.0/sources/deepstream_python_apps/apps/deepstream-segmentation/trt_fp32_py.engine
0:00:08.712010507 26834 0x17631b50 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus: [UID 1]: Load new model:model_basic.txt sucessfully
NvMMLiteBlockCreate : Block : BlockType = 256
[JPEG Decode] BeginSequence Display WidthxHeight 3150x2100
in videoconvert caps = video/x-raw(memory:NVMM), format=(string)RGBA, framerate=(fraction)1/1, width=(int)64, height=(int)64
/dvs/git/dirty/git-master_linux/nvutils/nvbufsurftransform/nvbufsurftransform.cpp:3885: => VIC Configuration failed image scale factor exceeds 16, use GPU for Transformation
0:00:09.333505857 26834 0x7f240035e0 WARN nvinfer gstnvinfer.cpp:1376:convert_batch_and_push_to_input_thread: error: NvBufSurfTransform failed with error -2 while converting buffer
Error: gst-stream-error-quark: NvBufSurfTransform failed with error -2 while converting buffer (1): /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(1376): convert_batch_and_push_to_input_thread (): /GstPipeline:pipeline0/GstNvInfer:primary-nvinference-engine
[JPEG Decode] NvMMLiteJPEGDecBlockPrivateClose done
[JPEG Decode] NvMMLiteJPEGDecBlockClose done

My config files are:
deepstream_basic.txt

[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5
#gie-kitti-output-dir=streamscl

[tiled-display]
enable=1
rows=1
columns=1
width=1280
height=720
gpu-id=0
#(0): nvbuf-mem-default - Default memory allocated, specific to particular platform
#(1): nvbuf-mem-cuda-pinned - Allocate Pinned/Host cuda memory, applicable for Tesla
#(2): nvbuf-mem-cuda-device - Allocate Device cuda memory, applicable for Tesla
#(3): nvbuf-mem-cuda-unified - Allocate Unified cuda memory, applicable for Tesla
#(4): nvbuf-mem-surface-array - Allocate Surface Array memory, applicable for Jetson
nvbuf-memory-type=0

[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
type=3
uri=file://./videoplayback.mp4
num-sources=1
#drop-frame-interval=2
gpu-id=0

#(0): memtype_device - Memory type Device
#(1): memtype_pinned - Memory type Host Pinned
#(2): memtype_unified - Memory type Unified
cudadec-memtype=0

[sink0]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File 4=RTSPStreaming 5=Overlay
type=5
sync=0
source-id=0
gpu-id=0
qos=0
nvbuf-memory-type=0
overlay-id=1

[sink1]
enable=1
type=3
#1=mp4 2=mkv
container=1
#1=h264 2=h265
codec=1

#encoder type 0=Hardware 1=Software
enc-type=0
sync=0
#iframeinterval=10
bitrate=2000000
#H264 Profile - 0=Baseline 2=Main 4=High
#H265 Profile - 0=Main 1=Main10
profile=0
output-file=out4.mp4
source-id=0

[sink2]
enable=0
#Type - 1=FakeSink 2=EglSink 3=File 4=RTSPStreaming
type=4
#1=h264 2=h265
codec=1
#encoder type 0=Hardware 1=Software
enc-type=0
sync=0
bitrate=4000000
#H264 Profile - 0=Baseline 2=Main 4=High
#H265 Profile - 0=Main 1=Main10
profile=0

#set below properties in case of RTSPStreaming
rtsp-port=8554
udp-port=5400

[osd]
enable=1
gpu-id=0
border-width=1
text-size=15
text-color=1;1;1;1;
font=Serif
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0
nvbuf-memory-type=0

[streammux]
gpu-id=0
##Boolean property to inform muxer that sources are live
live-source=0
batch-size=1
##time out in usec, to wait after the first buffer is available
##to push the batch even if the complete batch is not formed
batched-push-timeout=40000

width=640
height=480
##Enable to maintain aspect ratio wrt source, and allow black borders, works
##along with width, height properties
enable-padding=0
nvbuf-memory-type=0

[primary-gie]
enable=1
gpu-id=0
batch-size=1
interval=0
model-engine-file=./trt_fp32_py.engine
nvbuf-memory-type=0
config-file=model_basic.txt

and model_basic.txt is:

[property]
gpu-id=0
net-scale-factor=1.0

model-color-format=0
model-engine-file=./trt_fp32_py.engine

infer-dims=3;64;64
#uff-input-order=0
#uff-input-blob-name=data
batch-size=1

## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=0
num-detected-classes=6
interval=0
gie-unique-id=1
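## network-type: 0=Detector 1=Classifier 2=Segmentation 3=Instance Segmentation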
network-type=2
#output-blob-names=last

segmentation-threshold=0.0

Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
• Requirement details (This is for new requirements. Include the module name, i.e. which plugin or which sample application, and the function description.)
• The pipeline being used

  • Hardware Platform: Jetson Nano 4 GB
  • DeepStream Version: 6.0
  • JetPack Version: 4.6.2 (L4T 32.7.2)
  • TensorRT Version: 8.2.1.8
  • I created a project folder /opt/nvidia/deepstream/deepstream-6.0/sources/Project/configs and inside it created two custom files, deepstream_basic.txt and model_basic.txt (their contents are given above). I ran the command deepstream-app -c deepstream_basic.txt from the folder /opt/nvidia/deepstream/deepstream-6.0/sources/Project and got the error mentioned above.

I am not using any of the built-in DeepStream models. I had my PyTorch model, which I converted to ONNX and then to a .engine file. My model is for segmentation. Is there any way to deploy my custom TensorRT engine model and have control over the input? In the original model I have to preprocess the input before prediction; how can I do that in DeepStream?
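For context: gst-nvinfer's built-in preprocessing is limited to per-channel mean subtraction and a single global scale, applied as y = net-scale-factor * (x - offsets[c]). If the PyTorch preprocessing is a simple normalization, it can be expressed in the [property] group of the nvinfer config. A minimal sketch; the values below are placeholders, not taken from this thread:

[property]
## nvinfer preprocessing: y = net-scale-factor * (x - offsets[c]), per channel
## 1/255, assuming the model expects input in the 0-1 range
net-scale-factor=0.0039215686
## hypothetical per-channel means in the 0-255 range; order follows model-color-format
offsets=123.675;116.28;103.53
## 0=RGB 1=BGR
model-color-format=0

Anything beyond a mean and a single scale (for example per-channel standard deviations) cannot be expressed exactly this way and would need the preprocessing folded into the model itself, e.g. into the ONNX graph before building the engine.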

It seems the video resize factor exceeds 16.

How to resolve it?

Can you check where in your pipeline the resize factor exceeds 16?
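For context: on Jetson, gst-nvinfer scales frames to the network resolution with the VIC engine by default, and VIC cannot downscale by more than 16x in one pass. Here the 3150x2100 JPEG must be scaled to the 64x64 network input, roughly a 49x by 33x reduction, which is what the "image scale factor exceeds 16, use GPU for Transformation" log line refers to. If your DeepStream release supports the scaling-compute-hw key (an assumption; check the Gst-nvinfer documentation for your version), scaling can be moved to the GPU from the nvinfer config:

[property]
## compute hardware for scaling to network resolution: 0=Default, 1=GPU, 2=VIC
scaling-compute-hw=1

Otherwise, the scale factor must be reduced on the input side, for example by resizing the source images closer to the network resolution.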

Sorry for the late reply. My error got solved by resizing the dimensions of the input images.
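For anyone hitting the same error, a minimal sketch of what such a pre-resize could look like with OpenCV; the 640x640 target is an assumption that keeps the downscale to the 64x64 network input at 10x, within the 16x VIC limit, and the filenames are placeholders:

import cv2

# Load the original 3150x2100 JPEG
img = cv2.imread("input.jpg")

# Downscale so that width/64 and height/64 both stay <= 16
resized = cv2.resize(img, (640, 640), interpolation=cv2.INTER_AREA)

cv2.imwrite("input_resized.jpg", resized)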

Glad to know the issue is resolved.
