DeepStream 4.0.2 objectDetectorSSD sample not working

Hi,
I am using DeepStream 4.0.2 on my Ubuntu 18.04 desktop.
My configuration is:
DeepStream 4.0.2
TensorRT 6.0.1.5
CUDA 10.1
cuDNN 7.6.4
GStreamer 1.14.5

I successfully installed DeepStream 4.0.2 for x86 and was trying to run the samples.

The issue is:
While running objectDetectorSSD, after following all the steps in the README, the result is:
Creating LL OSD context new
0:00:05.825863570 26500 0x7f99f8002320 ERROR nvinfer gstnvinfer.cpp:511:gst_nvinfer_logger:<primary_gie_classifier> NvDsInferContext[UID 1]:log(): INVALID_ARGUMENT: Can not find binding of given name
0:00:05.825905333 26500 0x7f99f8002320 WARN nvinfer gstnvinfer.cpp:515:gst_nvinfer_logger:<primary_gie_classifier> NvDsInferContext[UID 1]:checkEngineParams(): Could not find output layer ‘MarkOutput_0’ in engine

Runtime commands:
h: Print this help
q: Quit

p: Pause
r: Resume

**PERF: FPS 0 (Avg)
**PERF: 0.00 (0.00)
** INFO: <bus_callback:189>: Pipeline ready

**PERF: 0.00 (0.00)
**PERF: 0.00 (0.00)
**PERF: 0.00 (0.00)

I also tried objectDetectorYOLO;
there is no error, but only a black window pops up and stays.

Please suggest a solution.
Thanks.

Can you run deepstream-test1?

Hi ChrisDing,
I ran deepstream-test1.

The result is:
Now playing: …/…/…/…/samples/streams/sample_720p.h264
Creating LL OSD context new
0:00:00.483356307 8772 0x55797590bf60 INFO nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger: NvDsInferContext[UID 1]:initialize(): Trying to create engine from model files
0:00:00.483665829 8772 0x55797590bf60 WARN nvinfer gstnvinfer.cpp:515:gst_nvinfer_logger: NvDsInferContext[UID 1]:generateTRTModel(): INT8 not supported by platform. Trying FP16 mode.
0:00:00.483681668 8772 0x55797590bf60 WARN nvinfer gstnvinfer.cpp:515:gst_nvinfer_logger: NvDsInferContext[UID 1]:generateTRTModel(): FP16 not supported by platform. Using FP32 mode.
0:00:09.596333256 8772 0x55797590bf60 INFO nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger: NvDsInferContext[UID 1]:generateTRTModel(): Storing the serialized cuda engine to file at /home/rahul/important/deepstream_sdk_v4.0.2_x86_64/samples/models/Primary_Detector/resnet10.caffemodel_b1_fp32.engine
Running…

The same output stays for a while, and nothing else appears.

Note:
My GPU is:
NVIDIA Corporation GM108M [GeForce 930M] (rev a2)

Is this related to the GPU configuration?
Thanks.

If TensorRT works on that GPU, then DeepStream will also work on it. Alternatively, you can upgrade to a newer GPU such as a GTX 1080.
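A quick way to confirm TensorRT works on the GM108M is to run trtexec against one of the bundled sample models, for example (paths assume the standard TensorRT 6 .deb install layout; adjust to yours):

/usr/src/tensorrt/bin/trtexec --deploy=/usr/src/tensorrt/data/mnist/mnist.prototxt --output=prob

If trtexec builds and times an engine without errors, the GPU itself should not be the blocker.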

It might also be a display or setup issue. Try setting the sync=0 property on the sink component, or change the sink to a fakesink; see the sketch below.
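A minimal sketch of the relevant sink group in the deepstream-app config you are running (group name and values are only an example; your config may differ):

[sink0]
enable=1
# 1=FakeSink, 2=EglSink (display), 3=File
type=1
sync=0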

Hi Chris,
I ran it with all of the suggested settings.
The output is:
Creating LL OSD context new
Deserialize yoloLayerV3 plugin: yolo_17
Deserialize yoloLayerV3 plugin: yolo_24

Runtime commands:
h: Print this help
q: Quit

p: Pause
r: Resume

NOTE: To expand a source in the 2D tiled display and view object details, left-click on the source.
To go back to the tiled display, right-click anywhere on the window.

** INFO: <bus_callback:189>: Pipeline ready

**PERF: FPS 0 (Avg)
**PERF: 0.00 (0.00)
**PERF: 0.00 (0.00)

I also tried storing the output to a file (via the file sink group in the config, roughly as sketched below), but the resulting mp4 file is 0 bytes.
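For reference, storing to a file in deepstream-app goes through a sink group like the following (a sketch based on the stock config; my exact values may differ):

[sink1]
enable=1
# 3=File
type=3
# 1=mp4, 2=mkv
container=1
# 1=h264, 2=h265
codec=1
sync=0
bitrate=2000000
output-file=out.mp4
source-id=0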
Maybe the issue is the old GPU.

Thanks for the support.