Frame drop on 25fps@720p RTSP feed

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) - Jetson
• DeepStream Version 7.1
• JetPack Version (valid for Jetson only) 6.2
• TensorRT Version 10.7.0.23
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type( questions, new requirements, bugs) questions
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

When I use a 25fps@720p RTSP feed with nvstreammux, the FPS drops to 20-22 without running any inference. The same pipeline works fine with a 30fps@720p feed. Since my camera is 720p@25fps, I need nvstreammux to work well with that feed.

I tried the suggested solution of setting sync-inputs=1,

but that made it drop even more frames.

The video works fine when I run it without nvstreammux:
gst-launch-1.0 rtspsrc location=rtsp://0.0.0.0:8554/mystream ! rtph264depay ! h264parse ! nvv4l2decoder ! nv3dsink

There is lag when I use the following pipeline or deepstream-app:
gst-launch-1.0 rtspsrc location=rtsp://0.0.0.0:8554/mystream ! rtph264depay ! h264parse ! nvv4l2decoder ! mux.sink_0 nvstreammux name=mux batch-size=1 width=1280 height=720 batched-push-timeout=40000 live-source=1 ! nv3dsink

[application]
enable-perf-measurement=1
perf-measurement-interval-sec=4

[tiled-display]
enable=1
rows=1
columns=1
width=1280
height=720
gpu-id=0
nvbuf-memory-type=0

[osd]
enable=0

[source0]
enable=1
type=4
uri=rtsp://0.0.0.0:8554/mystream
gpu-id=0
cudadec-memtype=0
latency=5000
num-extra-surfaces=10

[streammux]
gpu-id=0
live-source=1
batch-size=1
batched-push-timeout=40000
width=1280
height=720
enable-padding=1
nvbuf-memory-type=0
attach-sys-ts-as-ntp=0
sync-inputs=0

[sink0]
enable=1
type=2
sync=0
gpu-id=0
nvbuf-memory-type=0
codec=2
rtsp-port=8556

  1. Could you share 30 seconds of the following log? I'm wondering what the actual fps is.
gst-launch-1.0 -v rtspsrc location=rtsp://0.0.0.0:8554/mystream ! rtph264depay ! h264parse ! nvv4l2decoder ! fpsdisplaysink video-sink=fakesink text-overlay=false sync=false |grep current
  2. About "but above made it drop the frames even more": could you share a 30-second log of the following pipeline with sync-inputs=1? I'm wondering whether the output fps decreases. Does the output video stutter or lag? Please refer to this topic.
gst-launch-1.0 -v rtspsrc location=rtsp://0.0.0.0:8554/mystream ! rtph264depay ! h264parse ! nvv4l2decoder ! mux.sink_0 nvstreammux name=mux batch-size=1 width=1280 height=720 batched-push-timeout=40000 sync-inputs=1 live-source=1 ! fpsdisplaysink video-sink=fakesink text-overlay=false sync=false |grep current
  1. Actual FPS

gst-launch-1.0 -v rtspsrc location=rtsp://0.0.0.0:8554/mystream ! rtph264depay ! h264parse ! nvv4l2decoder ! fpsdisplaysink video-sink=fakesink text-overlay=false sync=false |grep current

NvMMLiteOpen : Block : BlockType = 261

NvMMLiteBlockCreate : Block : BlockType = 261

/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 21, dropped: 0, current: 27.54, average: 27.54

/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 33, dropped: 0, current: 23.44, average: 25.89

/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 48, dropped: 0, current: 28.31, average: 26.60

/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 61, dropped: 0, current: 25.76, average: 26.42

/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 73, dropped: 0, current: 23.21, average: 25.83

/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 85, dropped: 0, current: 23.69, average: 25.51

/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 98, dropped: 0, current: 25.29, average: 25.48

/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 112, dropped: 0, current: 26.70, average: 25.62

/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 125, dropped: 0, current: 25.35, average: 25.60

/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 139, dropped: 0, current: 27.77, average: 25.80

/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 150, dropped: 0, current: 21.70, average: 25.45

/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 162, dropped: 0, current: 23.27, average: 25.27

/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 175, dropped: 0, current: 25.24, average: 25.27

/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 189, dropped: 0, current: 27.73, average: 25.44

/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 200, dropped: 0, current: 21.60, average: 25.19

/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 212, dropped: 0, current: 23.73, average: 25.10

/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 223, dropped: 0, current: 21.81, average: 24.92

/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 237, dropped: 0, current: 27.22, average: 25.04

/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 250, dropped: 0, current: 25.18, average: 25.05

/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 264, dropped: 0, current: 27.22, average: 25.16

/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 275, dropped: 0, current: 21.87, average: 25.01

/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 289, dropped: 0, current: 26.46, average: 25.07

/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 302, dropped: 0, current: 25.43, average: 25.09

/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 314, dropped: 0, current: 23.83, average: 25.04

/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 327, dropped: 0, current: 25.77, average: 25.06

/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 339, dropped: 0, current: 23.21, average: 24.99

/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 352, dropped: 0, current: 25.66, average: 25.02

/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 364, dropped: 0, current: 23.28, average: 24.96

/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 377, dropped: 0, current: 25.32, average: 24.97

/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 391, dropped: 0, current: 27.58, average: 25.05

/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 402, dropped: 0, current: 21.41, average: 24.94

/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 416, dropped: 0, current: 27.22, average: 25.01

/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 429, dropped: 0, current: 25.76, average: 25.03

/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 443, dropped: 0, current: 26.88, average: 25.09

/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 456, dropped: 0, current: 24.88, average: 25.08

/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 468, dropped: 0, current: 23.75, average: 25.04

/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 481, dropped: 0, current: 25.34, average: 25.05

/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 496, dropped: 0, current: 28.27, average: 25.14

/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 509, dropped: 0, current: 25.90, average: 25.16

/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 521, dropped: 0, current: 23.28, average: 25.11

/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 534, dropped: 0, current: 25.84, average: 25.13

  2. Log with sync-inputs=1

gst-launch-1.0 -v rtspsrc location=rtsp://0.0.0.0:8554/mystream ! rtph264depay ! h264parse ! nvv4l2decoder ! mux.sink_0 nvstreammux name=mux batch-size=1 width=1280 height=720 batched-push-timeout=40000 sync-inputs=1 live-source=1 ! fpsdisplaysink video-sink=fakesink text-overlay=false sync=false |grep current

NvMMLiteOpen : Block : BlockType = 261

NvMMLiteBlockCreate : Block : BlockType = 261

/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 3, dropped: 0, current: 1.59, average: 1.59

/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 54, dropped: 0, current: 76.81, average: 21.20

/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 67, dropped: 0, current: 25.72, average: 21.95

/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 81, dropped: 0, current: 26.58, average: 22.63

/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 92, dropped: 0, current: 21.86, average: 22.54

/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 106, dropped: 0, current: 27.21, average: 23.06

/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 117, dropped: 0, current: 21.74, average: 22.93

/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 131, dropped: 0, current: 27.22, average: 23.32

/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 144, dropped: 0, current: 25.24, average: 23.48

/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 156, dropped: 0, current: 23.80, average: 23.51

/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 169, dropped: 0, current: 25.19, average: 23.63

/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 181, dropped: 0, current: 23.32, average: 23.61

/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 194, dropped: 0, current: 25.66, average: 23.74

/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 208, dropped: 0, current: 27.27, average: 23.94

/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 221, dropped: 0, current: 25.21, average: 24.02

/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 235, dropped: 0, current: 26.61, average: 24.16

/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 248, dropped: 0, current: 25.40, average: 24.22

/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 260, dropped: 0, current: 23.58, average: 24.19

/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 273, dropped: 0, current: 25.68, average: 24.25

/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 285, dropped: 0, current: 23.89, average: 24.24

/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 298, dropped: 0, current: 25.20, average: 24.28

/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 310, dropped: 0, current: 23.34, average: 24.24

/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 323, dropped: 0, current: 25.67, average: 24.30

/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 335, dropped: 0, current: 23.40, average: 24.26

/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 348, dropped: 0, current: 25.20, average: 24.30

/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 362, dropped: 0, current: 27.68, average: 24.41

/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 373, dropped: 0, current: 21.37, average: 24.31

/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 387, dropped: 0, current: 27.23, average: 24.40

/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 398, dropped: 0, current: 21.79, average: 24.32

/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 412, dropped: 0, current: 26.64, average: 24.40

/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 425, dropped: 0, current: 25.16, average: 24.42

/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 437, dropped: 0, current: 23.42, average: 24.39

/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 450, dropped: 0, current: 25.65, average: 24.42

/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 465, dropped: 0, current: 28.32, average: 24.53

/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 478, dropped: 0, current: 25.86, average: 24.57

/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 490, dropped: 0, current: 23.27, average: 24.53

/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 503, dropped: 0, current: 25.85, average: 24.57

/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 515, dropped: 0, current: 23.26, average: 24.53

/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 528, dropped: 0, current: 25.69, average: 24.56

/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 540, dropped: 0, current: 23.25, average: 24.53

/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 554, dropped: 0, current: 27.27, average: 24.59

/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 565, dropped: 0, current: 21.68, average: 24.53

/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 577, dropped: 0, current: 23.80, average: 24.51

/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 590, dropped: 0, current: 25.70, average: 24.54

/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 604, dropped: 0, current: 26.54, average: 24.58

Also, deepstream-app with fakesink:

streammux_debug.txt (563 Bytes)
Opening in BLOCKING MODE
NvMMLiteOpen : Block : BlockType = 261
NvMMLiteBlockCreate : Block : BlockType = 261
** INFO: <bus_callback:277>: Pipeline running

**PERF: FPS 0 (Avg)
**PERF: 25.90 (25.59)
**PERF: 19.19 (17.99)
**PERF: 17.09 (17.76)
**PERF: 20.30 (18.72)
**PERF: 17.84 (18.61)

From the printout of the second pipeline, the actual fps is smooth and close to 25. Is the video output of the following command smooth and fine?

gst-launch-1.0 -v rtspsrc location=rtsp://0.0.0.0:8554/mystream ! rtph264depay ! h264parse ! nvv4l2decoder ! mux.sink_0 nvstreammux name=mux batch-size=1 width=1280 height=720 batched-push-timeout=40000 sync-inputs=1 live-source=1 ! nv3dsink

Yep, it works fine.

log.txt (83.6 KB)

The fps of deepstream-app is a little low. Please set sync-inputs=1 for [source0] and sync=1 for [sink0], then check whether the output video is fine. If the lag issue persists, please refer to this link for performance improvement.
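For reference, a minimal sketch of the changed lines in streammux_debug.txt, assuming sync-inputs belongs in the [streammux] group (where the other nvstreammux properties in this config live) and sync=1 goes in [sink0]; all other lines stay as before:

[streammux]
sync-inputs=1

[sink0]
sync=1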

Yes, it is stuttering.

streammux_debug.txt (489 Bytes)

Opening in BLOCKING MODE
NvMMLiteOpen : Block : BlockType = 261
NvMMLiteBlockCreate : Block : BlockType = 261
** INFO: <bus_callback:277>: Pipeline running

Adjusting muxer’s batch push timeout based on FPS of fastest source to 33333

**PERF: FPS 0 (Avg)
**PERF: 14.88 (14.51)
**PERF: 12.16 (12.52)
**PERF: 18.86 (14.72)
**PERF: 12.46 (14.12)
**PERF: 15.05 (14.44)
**PERF: 14.25 (14.45)
**PERF: 12.76 (14.22)
**PERF: 14.75 (14.22)
**PERF: 15.58 (14.32)
**PERF: 17.83 (14.74)
**PERF: 13.77 (14.67)
**PERF: 14.07 (14.65)
**PERF: 14.63 (14.64)
**PERF: 16.51 (14.69)
**PERF: 14.85 (14.73)
**PERF: 16.29 (14.86)
**PERF: 14.44 (14.85)
**PERF: 13.89 (14.80)
**PERF: 16.92 (14.84)
**PERF: 14.49 (14.84)

@fanzh I have followed everything in that link. This only happens at 25 fps. I don't see a performance bottleneck when I lower the fps from 30 to 25.

Why does it stutter when I lower the fps to 25 (my camera is 25fps)?

Please set type=1 for [sink0] and remove num-extra-surfaces=10.

No improvement

streammux_debug.txt (467 Bytes)

** INFO: <bus_callback:291>: Pipeline ready

Opening in BLOCKING MODE
NvMMLiteOpen : Block : BlockType = 261
NvMMLiteBlockCreate : Block : BlockType = 261
** INFO: <bus_callback:277>: Pipeline running

Adjusting muxer’s batch push timeout based on FPS of fastest source to 33333

**PERF: FPS 0 (Avg)
**PERF: 13.81 (7.81)
**PERF: 16.38 (15.52)
**PERF: 17.81 (16.76)
**PERF: 14.47 (16.13)
**PERF: 17.66 (16.10)
**PERF: 17.66 (16.47)
**PERF: 12.47 (15.86)
**PERF: 14.11 (15.67)
**PERF: 15.33 (15.68)
**PERF: 16.24 (15.74)
**PERF: 17.16 (15.75)
**PERF: 15.47 (15.75)
**PERF: 15.29 (15.75)
**PERF: 15.63 (15.76)
**PERF: 15.10 (15.73)
**PERF: 15.29 (15.66)
**PERF: 15.66 (15.68)
**PERF: 16.06 (15.72)

I notice sync is still 0 for [sink0]; is streammux_debug.txt updated?

I updated it with sync=0, but there is no improvement.

  1. I mean, please set type=2 and sync=1 for [sink0] to check whether the output video still stutters.
  2. If the output video still stutters, you can dump and compare the pipeline graphs of the gst-launch and deepstream-app runs. As you know, the gst-launch pipeline works well. Please refer to this FAQ for how to dump the pipeline; a minimal sketch follows below.
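For reference, a minimal sketch of the usual GStreamer graph-dump workflow (the output directory and the dumped filename below are illustrative; deepstream-app and gst-launch choose their own dump names once the environment variable is set):

# dump .dot pipeline graphs for the deepstream-app run
export GST_DEBUG_DUMP_DOT_DIR=/tmp
deepstream-app -c streammux_debug.txt
# repeat with the equivalent gst-launch-1.0 pipeline, then convert a graph with Graphviz
dot -Tpng /tmp/0.00.02.504186227-ds-app-playing.dot -o ds-app-playing.png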
  1. Video still stutters for type=2, sync=1

  2. Graphs:

deepstream-app (stuttering)

streammux_debug.txt (467 Bytes)

gst-launch (no stuttering)

gst-launch-1.0 -v rtspsrc location=rtsp://0.0.0.0:8554/mystream ! rtph264depay ! h264parse ! nvv4l2decoder ! mux.sink_0 nvstreammux name=mux batch-size=1 width=1280 height=720 batched-push-timeout=40000 sync-inputs=1 live-source=1 ! nv3dsink

High-quality image: 0.00.02.504186227-ds-app-playing.dot.png (Google Drive)

  1. The 0.00.02.566150854-gst-launch.PAUSED_PLAYING.dot is not clear. Could you share a zip file? Thanks!
  2. To narrow down the issue, could you share 20 seconds of the 1.log generated by running the following command? Thanks!
export GST_DEBUG=3,rtpjitterbuffer:6,h264parse:6,v4l2videodec:6,videodecoder:6,nvstreammux:6 && deepstream-app -c streammux_debug.txt >1.log 2>&1

As requested,

  1. 0.00.02.566150854-gst-launch.PAUSED_PLAYING.dot.zip (870.9 KB)

  2. Log File - 1.log (17.6 MB)
    config - streammux_debug.txt (467 Bytes)

I really appreciate the help.

rtpjitterbuffer gstrtpjitterbuffer.c:2413:insert_lost_event:<rtpjitterbuffer0> Packet #8267 lost

As the log in 1.log shows, there is some packet loss when receiving data, which will affect the fps. Please add latency=2000 for [source0] in streammux_debug.txt.
If the lag issue persists, please share a new 1.log captured with the method in the last comment.
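For clarity, the change amounts to one line under [source0] in streammux_debug.txt (everything else unchanged):

[source0]
latency=2000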

The following happens with latency=2000:

streammux_debug.txt (472 Bytes)

1.log (27.4 MB)

(updated my previous reply)

Please modify
g_object_set (G_OBJECT (bin->src_elem), "drop-on-latency", TRUE, NULL);
to
g_object_set (G_OBJECT (bin->src_elem), "drop-on-latency", FALSE, NULL);
printf ("set drop-on-latency to false");
in /opt/nvidia/deepstream/deepstream-/sources/apps/apps-common/src/deepstream_source_bin.c, then rebuild deepstream-app according to /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-app/README, and run the rebuilt deepstream-app.
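For reference, a rough sketch of the rebuild steps from that README; the CUDA_VER value is an assumption for JetPack 6.2 / DeepStream 7.1, so please verify it against the README on your system:

# CUDA_VER below is an assumption for JetPack 6.2 / DeepStream 7.1; check the README
export CUDA_VER=12.6
cd /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-app
# sudo needs the variable passed explicitly; drop sudo if the source tree is user-writable
sudo CUDA_VER=$CUDA_VER make
sudo CUDA_VER=$CUDA_VER make install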

Hmm, no improvement.

1.log (32.7 MB)