What is the shape of the converter's output with two videos?

I’m debugging the DeepStream workflow now and learning as I go, but I still want to ask:
1. What is the input shape/resolution (width x height) to a custom module when there is more than one video input, each with a different resolution?
2. Will the result differ with or without an inference task, given that the deep network’s input has its own H x W?

I tested this:
video channel    resolution    final
0                1280x960
1                1280x960
2                1280x720      1280x960
3                1280x720
Can we connect the detection inference task in the DeepStream sample with the gray-output converter?

It seems this is not allowed when there is no inference task and the videos have different resolutions, because another assertion fires:

src/framePool.cpp:35: void FrameBuffer::push(uint8_t*, size_t, int, int, int, int, cudaStream_t): Assertion `frameSize <= frameSize_' failed.
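The assertion suggests the frame pool preallocates buffers for a fixed frame size, so a later stream with larger frames overflows the capacity check. A minimal sketch of that pattern (my own illustration in Python, not DeepStream's actual framePool.cpp):

```python
# Sketch of a fixed-capacity frame pool (illustration only; not DeepStream's
# actual implementation). The buffer capacity is fixed at creation, so a
# later, larger frame trips the size check.
class FrameBuffer:
    def __init__(self, frame_size: int):
        self.frame_size_ = frame_size          # capacity fixed at creation
        self.buffer = bytearray(frame_size)

    def push(self, frame: bytes) -> None:
        frame_size = len(frame)
        # Equivalent of: assert frameSize <= frameSize_
        assert frame_size <= self.frame_size_, "frameSize <= frameSize_ failed"
        self.buffer[:frame_size] = frame

# A pool sized for 1280x720 NV12 (w * h * 3 / 2 bytes) accepts 720p frames...
pool = FrameBuffer(1280 * 720 * 3 // 2)
pool.push(bytes(1280 * 720 * 3 // 2))          # fits
# ...but a 1280x960 frame is larger and triggers the assertion:
try:
    pool.push(bytes(1280 * 960 * 3 // 2))
except AssertionError as e:
    print("assertion:", e)
```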

My previous post was not correct: videos with different resolutions are not allowed in DeepStream, with or without an inference task.

We have also used the cuvid video decoding interface and know that cuvid allows setting the output resolution. We would like this capability enabled in DeepStream, since we need to resize video frames before sending them to our own DeepStream plugin (which encapsulates a TensorRT engine); otherwise we have to resize inside the plugin. We want to improve the pipeline performance.
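Until decoder-side scaling is exposed, a resize step would have to sit in front of the plugin. A minimal nearest-neighbor sketch on a single 8-bit plane (function name and approach are my own, purely illustrative; a production pipeline would do this on the GPU):

```python
def resize_nearest(src: bytes, sw: int, sh: int, dw: int, dh: int) -> bytearray:
    """Nearest-neighbor resize of an 8-bit single-plane image (e.g. a Y plane)."""
    dst = bytearray(dw * dh)
    for y in range(dh):
        sy = y * sh // dh                      # map destination row to source row
        row = sy * sw
        for x in range(dw):
            dst[y * dw + x] = src[row + x * sw // dw]
    return dst

# Downscale a 1280x960 plane to the 1280x720 size seen elsewhere in the thread
src = bytes(1280 * 960)
dst = resize_nearest(src, 1280, 960, 1280, 720)
assert len(dst) == 1280 * 720
```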
We also want the Y plane (gray/luminance channel) of the NV12 data without a color-space converter, but we can’t do that with DeepStream 1.0.
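Extracting the Y plane from NV12 indeed needs no color-space conversion, because NV12 already stores luminance as a separate leading plane. A sketch assuming a tightly packed buffer (helper name is my own):

```python
def nv12_y_plane(frame: bytes, width: int, height: int) -> bytes:
    """Extract the luminance (Y) plane from a tightly packed NV12 frame.

    NV12 stores a full-resolution Y plane (width*height bytes) followed by an
    interleaved UV plane (width*height//2 bytes). With no row padding, the Y
    plane is simply the first width*height bytes -- no color-space conversion
    needed. Real decoder output may have a row pitch > width, in which case
    each row must be copied with that stride instead.
    """
    return frame[:width * height]

w, h = 1280, 720
frame = bytes(w * h * 3 // 2)      # dummy NV12-sized buffer
y = nv12_y_plane(frame, w, h)
assert len(y) == w * h
```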

Hi,

We released DeepStreamSDK v1.5 last week, and it contains several new features.
Could you give it a try first?
https://developer.nvidia.com/deepstream-sdk-download

Thanks.

Good news: I’ve tried TensorRT 3.0.2, and it actually solved our problem. I will try DeepStreamSDK v1.5 soon.
Thanks.

I downloaded the new version and found it requires updated dependencies. We need to finish our prototype first, so trying DeepStreamSDK v1.5 will have to wait.
Thanks.

Did you solve the multi-resolution problem with the new version of DeepStream? If so, could you tell me how? Thanks so much!

Sorry, I haven’t tried the new version of DeepStream yet.

If you do, could you let me know whether the assertion failure is solved when multi-resolution videos are served as inputs?

Hi,

I tried DeepStream 1.5; everything is fine.
Platform: Tesla P4 + driver 390.30 + CUDA-9.0.176 (+CUDA-9.0.176.1 +CUDA-9.0.176.2) + Video_Codec_SDK_8.0.14 + cudnn-7.1.1 + TensorRT-3.0.4 + deepstream-1.5
Inputs with different resolutions are still not supported. I haven’t added an inference module to the worker.

[DEBUG][13:42:21] =========== Video Parameters Begin =============
[DEBUG][13:42:21] 	Video codec     : AVC/H.264
[DEBUG][13:42:21] 	Frame rate      : 10/1 = 10 fps
[DEBUG][13:42:21] 	Sequence format : Progressive
[DEBUG][13:42:21] 	Coded frame size: [1280, 720]
[DEBUG][13:42:21] 	Display area    : [0, 0, 1280, 720]
[DEBUG][13:42:21] 	Chroma format   : YUV 420
[DEBUG][13:42:21] =========== Video Parameters End   =============
[DEBUG][13:42:21] =========== Video Parameters Begin =============
[DEBUG][13:42:21] 	Video codec     : AVC/H.264
[DEBUG][13:42:21] 	Frame rate      : 6/1 = 6 fps
[DEBUG][13:42:21] 	Sequence format : Progressive
[DEBUG][13:42:21] 	Coded frame size: [1280, 960]
[DEBUG][13:42:21] 	Display area    : [0, 0, 1280, 960]
[DEBUG][13:42:21] 	Chroma format   : YUV 420
[DEBUG][13:42:21] =========== Video Parameters End   =============
[DEBUG][13:42:21] =========== Video Parameters Begin =============
[DEBUG][13:42:21] 	Video codec     : AVC/H.264
[DEBUG][13:42:21] 	Frame rate      : 6/1 = 6 fps
[DEBUG][13:42:21] 	Sequence format : Progressive
[DEBUG][13:42:21] 	Coded frame size: [1280, 720]
[DEBUG][13:42:21] 	Display area    : [0, 0, 1280, 720]
[DEBUG][13:42:21] 	Chroma format   : YUV 420
[DEBUG][13:42:21] =========== Video Parameters End   =============
----------: src/framePool.cpp:35: void FrameBuffer::push(uint8_t*, size_t, int, int, int, int, cudaStream_t): Assertion `frameSize <= frameSize_' failed.

Resizing the video to the tensor input size is not enough; some tasks in the pipeline need the original video frame.

Thanks

haifengli, thanks for your reply!