Random "Segmentation fault (core dumped)" from the pre-built deepstream-app in the official DeepStream 4.0 Docker image

Hi! I used the Docker image nvcr.io/nvidia/deepstream:4.0-19.07 to run the deepstream-app application, but it randomly produced the error “Segmentation fault (core dumped)”. The following are the details of what I did:

1. Download the docker image and create a container.
2. In the container, run “deepstream-app -c movie_test.txt”. The config file “movie_test.txt” is shown at the bottom of this post, with the following modifications:

  • the sources are changed from the sample .mkv to six movies;
  • the tracker and ds-example plugins are added;
  • batched-push-timeout=40;
  • batch-size=30.
3. Three cases were tried: (1) enable=0/1 in [ds-example]; (2) running the pre-built command "deepstream-app -c ..."; (3) entering the directory "/sources/apps/sample_apps/deepstream-app", running "make", then running the command "./deepstream-app -c ...".

    All of them produced the error “Segmentation fault (core dumped)” after a while, anywhere from 5 to 20 minutes.

    My GPU is a GTX 1080 Ti, with Driver Version: 418.39 and CUDA Version: 10.1.

    Please help me fix this error; thank you in advance! A rough sketch of the commands I used is included below, just before the config file.
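    For reference, a rough sketch of the commands I used (the exact docker options and paths may have differed slightly; the X11 options are only needed because sink0 uses EglSink):

    docker pull nvcr.io/nvidia/deepstream:4.0-19.07
    docker run --runtime=nvidia -it \
        -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix \
        nvcr.io/nvidia/deepstream:4.0-19.07

    # inside the container, from the DeepStream SDK root:
    cd samples/configs/deepstream-app
    deepstream-app -c movie_test.txt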

    The config file I used:

    [application]
    enable-perf-measurement=1
    perf-measurement-interval-sec=5
    #gie-kitti-output-dir=streamscl
    
    [tiled-display]
    enable=1
    rows=3
    columns=3
    width=1280
    height=720
    gpu-id=0
    #(0): nvbuf-mem-default - Default memory allocated, specific to particular platform
    #(1): nvbuf-mem-cuda-pinned - Allocate Pinned/Host cuda memory, applicable for Tesla
    #(2): nvbuf-mem-cuda-device - Allocate Device cuda memory, applicable for Tesla
    #(3): nvbuf-mem-cuda-unified - Allocate Unified cuda memory, applicable for Tesla
    #(4): nvbuf-mem-surface-array - Allocate Surface Array memory, applicable for Jetson
    nvbuf-memory-type=0
    
    [ds-example]
    enable=1
    processing-width=640
    processing-height=360
    full-frame=1
    unique-id=15
    gpu-id=0
    
    [source0]
    enable=1
    #Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
    type=2
    uri=file://../../streams/GTO.11.FeiXue.x264-OGG.mkv
    num-sources=1
    drop-frame-interval=0
    gpu-id=0
    num-extra-surfaces=0
    # (0): memtype_device   - Memory type Device
    # (1): memtype_pinned   - Memory type Host Pinned
    # (2): memtype_unified  - Memory type Unified
    cudadec-memtype=0
    camera-id=901
    
    [source1]
    enable=1
    #Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
    type=2
    uri=file://../../streams/GTO.01.FeiXue.x264-OGG.mkv
    drop-frame-interval=0
    num-sources=1
    gpu-id=0
    num-extra-surfaces=0
    # (0): memtype_device   - Memory type Device
    # (1): memtype_pinned   - Memory type Host Pinned
    # (2): memtype_unified  - Memory type Unified
    cudadec-memtype=0
    camera-id=902
    
    [source2]
    enable=1
    #Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
    type=2
    uri=file://../../streams/GTO.02.FeiXue.x264-OGG.mkv
    num-sources=1
    drop-frame-interval=0
    gpu-id=0
    num-extra-surfaces=0
    # (0): memtype_device   - Memory type Device
    # (1): memtype_pinned   - Memory type Host Pinned
    # (2): memtype_unified  - Memory type Unified
    cudadec-memtype=0
    camera-id=903
    
    [source3]
    enable=1
    #Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
    type=3
    uri=file://../../streams/GTO.03.FeiXue.x264-OGG.mkv
    num-sources=1
    drop-frame-interval=0
    gpu-id=0
    num-extra-surfaces=0
    # (0): memtype_device   - Memory type Device
    # (1): memtype_pinned   - Memory type Host Pinned
    # (2): memtype_unified  - Memory type Unified
    cudadec-memtype=0
    
    [source4]
    enable=1
    #Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
    type=3
    uri=file://../../streams/GTO.04.FeiXue.x264-OGG.mkv
    num-sources=1
    drop-frame-interval=0
    gpu-id=0
    num-extra-surfaces=0
    # (0): memtype_device   - Memory type Device
    # (1): memtype_pinned   - Memory type Host Pinned
    # (2): memtype_unified  - Memory type Unified
    cudadec-memtype=0
    
    [source5]
    enable=1
    #Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
    type=3
    uri=file://../../streams/GTO.05.FeiXue.x264-OGG.mkv
    num-sources=1
    drop-frame-interval=0
    gpu-id=0
    num-extra-surfaces=0
    # (0): memtype_device   - Memory type Device
    # (1): memtype_pinned   - Memory type Host Pinned
    # (2): memtype_unified  - Memory type Unified
    cudadec-memtype=0
    
    [sink0]
    enable=1
    #Type - 1=FakeSink 2=EglSink 3=File
    type=2
    sync=0
    source-id=0
    gpu-id=0
    nvbuf-memory-type=0
    
    [sink1]
    enable=0
    type=3
    #1=mp4 2=mkv
    container=1
    #1=h264 2=h265
    codec=1
    sync=0
    #iframeinterval=10
    bitrate=2000000
    output-file=out.mp4
    source-id=0
    
    [sink2]
    enable=0
    #Type - 1=FakeSink 2=EglSink 3=File 4=RTSPStreaming
    type=4
    #1=h264 2=h265
    codec=1
    sync=0
    bitrate=4000000
    # set below properties in case of RTSPStreaming
    rtsp-port=8554
    udp-port=5400
    
    [osd]
    enable=1
    gpu-id=0
    border-width=1
    text-size=15
    text-color=1;1;1;1;
    text-bg-color=0.3;0.3;0.3;1
    font=Serif
    show-clock=0
    clock-x-offset=800
    clock-y-offset=820
    clock-text-size=12
    clock-color=1;0;0;0
    nvbuf-memory-type=0
    
    [streammux]
    gpu-id=0
    ##Boolean property to inform muxer that sources are live
    live-source=0
    batch-size=300
    ##time out in usec, to wait after the first buffer is available
    ##to push the batch even if the complete batch is not formed
    batched-push-timeout=40
    ## Set muxer output width and height
    width=1920
    height=1080
    ##Enable to maintain aspect ratio wrt source, and allow black borders, works
    ##along with width, height properties
    enable-padding=0
    nvbuf-memory-type=0
    
    # config-file property is mandatory for any gie section.
    # Other properties are optional and if set will override the properties set in
    # the infer config file.
    [primary-gie]
    enable=1
    gpu-id=0
    model-engine-file=../../models/Primary_Detector/resnet10.caffemodel_b30_int8.engine
    #Required to display the PGIE labels, should be added even when using config-file
    #property
    batch-size=30
    #Required by the app for OSD, not a plugin property
    bbox-border-color0=1;0;0;1
    bbox-border-color1=0;1;1;1
    bbox-border-color2=0;0;1;1
    bbox-border-color3=0;1;0;1
    interval=0
    #Required by the app for SGIE, when used along with config-file property
    gie-unique-id=1
    nvbuf-memory-type=0
    config-file=config_infer_primary.txt
    
    [tracker]
    enable=1
    tracker-width=640
    tracker-height=368
    #ll-lib-file=/opt/nvidia/deepstream/deepstream-4.0/lib/libnvds_mot_iou.so
    ll-lib-file=/opt/nvidia/deepstream/deepstream-4.0/lib/libnvds_nvdcf.so
    #ll-lib-file=/opt/nvidia/deepstream/deepstream-4.0/lib/libnvds_mot_klt.so
    #ll-config-file required for DCF/IOU only
    ll-config-file=tracker_config.yml
    #ll-config-file=iou_config.txt
    gpu-id=0
    #enable-batch-process applicable to DCF only
    enable-batch-process=1
    
    [tests]
    file-loop=1
    


    Hi,
    What is the failure rate? I used the same config as in your post, only replacing the video files with the built-in samples/streams/sample_1080p_h264.mp4, and I could not reproduce the issue within 50 minutes. By the way, I am using a T4 card with driver version 418.67. Can you use gdb to grab the call stack when the issue happens?
    When running docker, add this option to the docker command to enable gdb permissions: --security-opt seccomp=unconfined
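    For example, something like this (keep whatever other options you normally use to start the container):

    docker run --runtime=nvidia -it --security-opt seccomp=unconfined \
        nvcr.io/nvidia/deepstream:4.0-19.07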

    Hi amycao, thank you for your help. I am now using a new machine with:

  • T4 GPU
  • NVIDIA-SMI 418.87.00, Driver Version: 418.87.00, CUDA Version: 10.1

    I will test deepstream-app again and report the results later.

    Hi amycao, with the following environment, I ran deepstream-app three times, and all of the runs produced errors.

  • T4 GPU
  • NVIDIA-SMI 418.87.00, Driver Version: 418.87.00, CUDA Version: 10.1

    Below are the errors I got with gdb. The config file is the same as in #1. I hope the following error messages help in tracking down the problem.

    After creating a new container:

    The first case:

  • cd /sources/apps/sample_apps/deepstream-app
  • make
  • # gdb deepstream-app
  • it produced the following error:

    In the gdb session:
    
    run -c ../../../../samples/configs/deepstream-app/movie_test.txt
    
    ...
    ...
    [New Thread 0x7ffdf5fff700 (LWP 2689)]
    [Thread 0x7ffddbfff700 (LWP 2688) exited]
    [New Thread 0x7ffe1effd700 (LWP 2690)]
    [Thread 0x7ffdf5fff700 (LWP 2689) exited]
    [Thread 0x7ffe1effd700 (LWP 2690) exited]
    
    Thread 9 "deepstream-app" received signal SIGSEGV, Segmentation fault.
    [Switching to Thread 0x7fff75fff700 (LWP 51)]
    0x00007fffccd3bf74 in std::_Rb_tree_increment(std::_Rb_tree_node_base*) () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6
    (gdb) list
    1	rtsp-auth.c: No such file or directory.
    (gdb)
    

    The second case:

  • after the error in the first case, I modified the Makefile in '/sources/apps/sample_apps/deepstream-app', adding the '-g' option on line 59, so that lines 58-59 read:

    58  $(APP): $(OBJS) Makefile
    59  	$(CC) -g -o $(APP) $(OBJS) $(LIBS)
    
  • then ran 'make' to rebuild the deepstream-app binary
  • # gdb deepstream-app
  • this time it produced the following error:

    ...
    ...
    [New Thread 0x7ffe157fe700 (LWP 27376)]
    [Thread 0x7ffdfdfff700 (LWP 27375) exited]
    [Thread 0x7ffe157fe700 (LWP 27376) exited]
    [New Thread 0x7ffe157fe700 (LWP 27377)]
    [New Thread 0x7ffdfdfff700 (LWP 27378)]
    [Thread 0x7ffe157fe700 (LWP 27377) exited]
    [Thread 0x7ffdfdfff700 (LWP 27378) exited]
    
    !![ERROR] map::at
    [Thread 0x7ffe05fff700 (LWP 2865) exited]
    [Thread 0x7ffe14ffd700 (LWP 2864) exited]
    [Thread 0x7ffe15fff700 (LWP 2862) exited]
    [Thread 0x7ffe1cffd700 (LWP 2861) exited]
    [Thread 0x7ffe1d7fe700 (LWP 2860) exited]
    [Thread 0x7ffe1dfff700 (LWP 2859) exited]
    ...
    ...
    [Thread 0x7fff9c88c700 (LWP 2807) exited]
    [Thread 0x7fff9d08d700 (LWP 2806) exited]
    [Thread 0x7fff9d88e700 (LWP 2805) exited]
    [Thread 0x7fff9e28f700 (LWP 2804) exited]
    [Thread 0x7fffb8c39700 (LWP 2803) exited]
    [Thread 0x7ffff7fbb300 (LWP 2799) exited]
    [Inferior 1 (process 2799) exited normally]
    (gdb) list
    1	rtsp-auth.c: No such file or directory.
    (gdb)
    

    The third case:

    The same as case 2, but with enable=1 in the [ds-example] section of the config file.

    It produced the following error:

    ...
    [Thread 0x7ffd2e7fc700 (LWP 42605) exited]
    [New Thread 0x7ffd2e7fc700 (LWP 42606)]
    [New Thread 0x7ffe197fa700 (LWP 42607)]
    [Thread 0x7ffd2e7fc700 (LWP 42606) exited]
    [Thread 0x7ffe197fa700 (LWP 42607) exited]
    [New Thread 0x7ffe197fa700 (LWP 42608)]
    [New Thread 0x7ffd2e7fc700 (LWP 42609)]
    [Thread 0x7ffe197fa700 (LWP 42608) exited]
    [Thread 0x7ffd2e7fc700 (LWP 42609) exited]
    [New Thread 0x7ffd2e7fc700 (LWP 42610)]
    [Thread 0x7ffd2e7fc700 (LWP 42610) exited]
    
    Thread 10 "deepstream-app" received signal SIGSEGV, Segmentation fault.
    [Switching to Thread 0x7fff75fff700 (LWP 30179)]
    0x00007fff91aaeb2d in NvDCF::deleteDuplicateTrackers() () from /opt/nvidia/deepstream/deepstream-4.0/lib/libnvds_nvdcf.so
    (gdb) list
    1	rtsp-auth.c: No such file or directory.
    (gdb)
    

    This is the same as case 1.

    Did you do the below? It seems you are missing some RTSP-related files. Also, can you enter bt when the issue happens within gdb? (A sketch follows the command.)

    sudo apt-get install libssl1.0.0 libgstreamer1.0-0 gstreamer1.0-tools \
        gstreamer1.0-plugins-good gstreamer1.0-plugins-bad gstreamer1.0-plugins-ugly \
        gstreamer1.0-libav libgstrtspserver-1.0-0 libjansson4
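    For reference, a rough sketch of the gdb session, reusing the config path from earlier in this thread:

    # gdb deepstream-app
    (gdb) run -c ../../../../samples/configs/deepstream-app/movie_test.txt
    ... wait until the "received signal SIGSEGV" message appears ...
    (gdb) bt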

    Hi amycao, I started a new container and installed the libraries above, but I still get the error. However, if I change the tracker algorithm from DCF to IOU or KLT (see the config snippet below), deepstream-app no longer produces any error, so I wonder if there is a bug in the libnvds_nvdcf library. Below I attach the gdb debug info; I hope it is useful.
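    For reference, the workaround is just swapping the low-level tracker library in the [tracker] section, roughly like this (iou_config.txt is the sample IOU config that is commented out in my original config):

    [tracker]
    enable=1
    tracker-width=640
    tracker-height=368
    ll-lib-file=/opt/nvidia/deepstream/deepstream-4.0/lib/libnvds_mot_iou.so
    #ll-config-file required for DCF/IOU only
    ll-config-file=iou_config.txt
    gpu-id=0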

    The config for the cases below is the same as that in #1 (still using the DCF tracker).

    I tried two cases.

    Case 1:

    1. start a new container
    2. apt-get update
    3. apt-get install the libraries listed above
    4. run gdb with the pre-built deepstream-app

    The error info is as follows:

    [New Thread 0x7ffd15ffb700 (LWP 12968)]
    [New Thread 0x7ffd16ffd700 (LWP 12969)]
    [Thread 0x7ffd15ffb700 (LWP 12968) exited]
    [Thread 0x7ffd16ffd700 (LWP 12969) exited]
    [New Thread 0x7ffd16ffd700 (LWP 12970)]
    [New Thread 0x7ffd15ffb700 (LWP 12971)]
    [Thread 0x7ffd16ffd700 (LWP 12970) exited]
    [Thread 0x7ffd15ffb700 (LWP 12971) exited]
    [New Thread 0x7ffd15ffb700 (LWP 12972)]
    [Thread 0x7ffd15ffb700 (LWP 12972) exited]
    
    Thread 10 "deepstream-app" received signal SIGSEGV, Segmentation fault.
    [Switching to Thread 0x7fff75fff700 (LWP 28027)]
    0x00007fff91aaeb2d in NvDCF::deleteDuplicateTrackers() () from /opt/nvidia/deepstream/deepstream-4.0/lib/libnvds_nvdcf.so
    (gdb) bt
    #0  0x00007fff91aaeb2d in NvDCF::deleteDuplicateTrackers() () at /opt/nvidia/deepstream/deepstream-4.0/lib/libnvds_nvdcf.so
    #1  0x00007fff91ab7d90 in NvDCF::update(_NvMOTProcessParams const*, std::map<unsigned long, cv::Mat, std::less<unsigned long>, std::allocator<std::pair<unsigned long const, cv::Mat> > > const&, _NvMOTTrackedObjBatch*&) ()
        at /opt/nvidia/deepstream/deepstream-4.0/lib/libnvds_nvdcf.so
    #2  0x00007fff91ac6a7d in NvMOTContext::processFrame(_NvMOTProcessParams const*, _NvMOTTrackedObjBatch*) ()
        at /opt/nvidia/deepstream/deepstream-4.0/lib/libnvds_nvdcf.so
    #3  0x00007fff91ac7cae in NvMOT_Process () at /opt/nvidia/deepstream/deepstream-4.0/lib/libnvds_nvdcf.so
    #4  0x00007fffb00bd988 in NvTrackerProc::processBatch() () at /usr/lib/x86_64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_tracker.so
    #5  0x00007fffb00c21f0 in void std::__invoke_impl<void, void (NvTrackerProc::*)(), NvTrackerProc*>(std::__invoke_memfun_deref, void (NvTrackerProc::*&&)(), NvTrackerProc*&&) () at /usr/lib/x86_64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_tracker.so
    #6  0x00007fffb00bf300 in std::__invoke_result<void (NvTrackerProc::*)(), NvTrackerProc*>::type std::__invoke<void (NvTrackerProc::*)(), NvTrackerProc*>(void (NvTrackerProc::*&&)(), NvTrackerProc*&&) () at /usr/lib/x86_64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_tracker.so
    #7  0x00007fffb00d6271 in decltype (__invoke((_S_declval<0ul>)(), (_S_declval<1ul>)())) std::thread::_Invoker<std::tuple<void (NvTrackerProc::*)(), NvTrackerProc*> >::_M_invoke<0ul, 1ul>(std::_Index_tuple<0ul, 1ul>) ()
        at /usr/lib/x86_64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_tracker.so
    #8  0x00007fffb00d61a8 in std::thread::_Invoker<std::tuple<void (NvTrackerProc::*)(), NvTrackerProc*> >::operator()() ()
        at /usr/lib/x86_64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_tracker.so
    #9  0x00007fffb00d6130 in std::thread::_State_impl<std::thread::_Invoker<std::tuple<void (NvTrackerProc::*)(), NvTrackerProc*> > >::_M_run() () at /usr/lib/x86_64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_tracker.so
    #10 0x00007fffccd4e66f in  () at /usr/lib/x86_64-linux-gnu/libstdc++.so.6
    #11 0x00007fffeb79d6db in start_thread () at /lib/x86_64-linux-gnu/libpthread.so.0
    #12 0x00007ffff608a88f in clone () at /lib/x86_64-linux-gnu/libc.so.6
    (gdb)
    

    Case 2:

    1. based on case 1, rebuild deepstream-app from the source code
    2. debug the rebuilt deepstream-app with gdb in the directory 'sources/apps/sample_apps/deepstream-app'

    The error info is as follows:

    use tracker: ll-lib-file=/opt/nvidia/deepstream/deepstream-4.0/lib/libnvds_nvdcf.so
    [Thread 0x7ffd28ff9700 (LWP 46404) exited]
    [Thread 0x7ffcf97fe700 (LWP 46405) exited]
    [New Thread 0x7ffcf97fe700 (LWP 46406)]
    [New Thread 0x7ffd28ff9700 (LWP 46407)]
    [Thread 0x7ffcf97fe700 (LWP 46406) exited]
    [Thread 0x7ffd28ff9700 (LWP 46407) exited]
    
    Thread 10 "deepstream-app" received signal SIGSEGV, Segmentation fault.
    [Switching to Thread 0x7fff75fff700 (LWP 3457)]
    0x00007fffccd3af63 in std::_Rb_tree_increment(std::_Rb_tree_node_base*) () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6
    (gdb) bt
    #0  0x00007fffccd3af63 in std::_Rb_tree_increment(std::_Rb_tree_node_base*) () at /usr/lib/x86_64-linux-gnu/libstdc++.so.6
    #1  0x00007fff91aaebc5 in NvDCF::deleteDuplicateTrackers() () at /opt/nvidia/deepstream/deepstream-4.0/lib/libnvds_nvdcf.so
    #2  0x00007fff91ab7d90 in NvDCF::update(_NvMOTProcessParams const*, std::map<unsigned long, cv::Mat, std::less<unsigned long>, std::allocator<std::pair<unsigned long const, cv::Mat> > > const&, _NvMOTTrackedObjBatch*&) () at /opt/nvidia/deepstream/deepstream-4.0/lib/libnvds_nvdcf.so
    #3  0x00007fff91ac6a7d in NvMOTContext::processFrame(_NvMOTProcessParams const*, _NvMOTTrackedObjBatch*) () at /opt/nvidia/deepstream/deepstream-4.0/lib/libnvds_nvdcf.so
    #4  0x00007fff91ac7cae in NvMOT_Process () at /opt/nvidia/deepstream/deepstream-4.0/lib/libnvds_nvdcf.so
    #5  0x00007fffb00bd988 in NvTrackerProc::processBatch() () at /usr/lib/x86_64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_tracker.so
    #6  0x00007fffb00c21f0 in void std::__invoke_impl<void, void (NvTrackerProc::*)(), NvTrackerProc*>(std::__invoke_memfun_deref, void (NvTrackerProc::*&&)(), NvTrackerProc*&&) ()
        at /usr/lib/x86_64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_tracker.so
    #7  0x00007fffb00bf300 in std::__invoke_result<void (NvTrackerProc::*)(), NvTrackerProc*>::type std::__invoke<void (NvTrackerProc::*)(), NvTrackerProc*>(void (NvTrackerProc::*&&)(), NvTrackerProc*&&) () at /usr/lib/x86_64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_tracker.so
    #8  0x00007fffb00d6271 in decltype (__invoke((_S_declval<0ul>)(), (_S_declval<1ul>)())) std::thread::_Invoker<std::tuple<void (NvTrackerProc::*)(), NvTrackerProc*> >::_M_invoke<0ul, 1ul>(std::_Index_tuple<0ul, 1ul>) () at /usr/lib/x86_64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_tracker.so
    #9  0x00007fffb00d61a8 in std::thread::_Invoker<std::tuple<void (NvTrackerProc::*)(), NvTrackerProc*> >::operator()() ()
        at /usr/lib/x86_64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_tracker.so
    #10 0x00007fffb00d6130 in std::thread::_State_impl<std::thread::_Invoker<std::tuple<void (NvTrackerProc::*)(), NvTrackerProc*> > >::_M_run() ()
        at /usr/lib/x86_64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_tracker.so
    #11 0x00007fffccd4e66f in  () at /usr/lib/x86_64-linux-gnu/libstdc++.so.6
    #12 0x00007fffeb79d6db in start_thread () at /lib/x86_64-linux-gnu/libpthread.so.0
    #13 0x00007ffff608a88f in clone () at /lib/x86_64-linux-gnu/libc.so.6
    (gdb) list
    1	rtsp-auth.c: No such file or directory.
    (gdb)
    

    I use the same config as you, with the DCF tracker, but I cannot reproduce your issue; I am using different streams than you.
    Can you share the streams with me so I can reproduce the issue and check further?

    Of course. Here are the six movies I used for testing. I uploaded them to Baidu Pan; the link and extraction code are:

    Link: https://pan.baidu.com/s/1hDEgg44TAbOOWnvitIQ7xQ
    Extraction code: 6baz

    You may need to install the Baidu Pan client to download them. Here is the client link if you need it:

    https://pan.baidu.com/

    Hi,
    I can reproduce the issue with your videos. We are looking into it and will update once we have progress.

    Hi,
    Can you try the latest release, 4.0.1? This issue is fixed in that version.
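    For example, assuming the devel tag referenced in the following post:

    docker pull nvcr.io/nvidia/deepstream:4.0.1-19-devel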

    Hi, I ran into some trouble using the image “nvcr.io/nvidia/deepstream:4.0.1-19-devel”. I downloaded the image on a laptop without a GPU, then used the following “save” command to save the image to a tar file:

    docker save -o /path/deepstream401devel.tar nvcr.io/nvidia/deepstream:4.0.1-19-devel
    

    After this, I loaded the tar file on GPU servers with a 1080 Ti and a T4, respectively, using the following command:

    docker load -i /path/deepstream401devel.tar
    

    Then I got the following error:

    open /data/docker/docker/tmp/docker-import-257678482/d4a66b5d5e3acc2bbf0052d3b8a8ef60b5e38e3d10454f1c38fb611db3ec3cc0/json: no such file or directory
    

    However, this error doesn’t happen with the images deepstream:4.0.1-19-base and deepstream:4.0.1-19-samples.

    Can you tell me where the problem is?