I am attempting to set up the test4 app with RabbitMQ. The RabbitMQ server is running fine on my host machine. When I run deepstream-test4-app, it connects to the RabbitMQ server and builds the TensorRT engine file, but it does not show the video display and does not send any messages to RabbitMQ.
Here is my output:
nano@nano:~/deepstream/sources/apps/sample_apps/deepstream-test4$ ./run-test4-app.sh
(deepstream-test4-app:5226): GLib-CRITICAL **: 08:29:59.385: g_strchug: assertion 'string != NULL' failed
(deepstream-test4-app:5226): GLib-CRITICAL **: 08:29:59.385: g_strchomp: assertion 'string != NULL' failed
Now playing: /home/nano/deepstream/samples/streams/sample_720p.mp4
Using winsys: x11
(deepstream-test4-app:5226): GLib-CRITICAL **: 08:29:59.389: g_strrstr: assertion 'haystack != NULL' failed
Opening in BLOCKING MODE
Creating LL OSD context new
0:00:06.825741020 5226 0x558dd62870 INFO nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]:initialize(): Trying to create engine from model files
0:00:06.826012222 5226 0x558dd62870 WARN nvinfer gstnvinfer.cpp:515:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]:generateTRTModel(): INT8 not supported by platform. Trying FP16 mode.
0:02:16.235384542 5226 0x558dd62870 INFO nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]:generateTRTModel(): Storing the serialized cuda engine to file at /opt/nvidia/deepstream/deepstream-4.0/samples/models/Primary_Detector/resnet10.caffemodel_b1_fp16.engine
Running...
NvMMLiteOpen : Block : BlockType = 261
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 261
It then stays in this state and nothing else happens. Any ideas on how to fix this?
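For context, run-test4-app.sh is just a wrapper I use around the sample binary. Its actual contents are not shown above, so the paths, connection string, and options below are a hypothetical sketch of what such a wrapper typically looks like for the DeepStream 4.0 test4 sample, not my exact script:

```shell
#!/bin/bash
# Hypothetical wrapper around deepstream-test4-app.
# All paths, the connection string, and the topic name here are assumptions
# for illustration; substitute your own values.
./deepstream-test4-app \
  -i /home/nano/deepstream/samples/streams/sample_720p.mp4 \
  -p /opt/nvidia/deepstream/deepstream-4.0/lib/libnvds_amqp_proto.so \
  --conn-str="localhost;5672;guest" \
  --topic=mytopic
```

The GLib-CRITICAL assertions about NULL strings in the log make me wonder whether one of these options (for example the connection string or the adapter config) is not reaching the app correctly.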