But now, when I try to run any app, the execution gets stuck here:
/opt/nvidia/deepstream/deepstream-6.0/deepstream_tao_apps# ./apps/tao_segmentation/ds-tao-segmentation -c configs/unet_tao/pgie_unet_tao_config.txt -i /opt/nvidia/deepstream/deepstream-6.0/samples/streams/sample_720p.h264
Now playing: configs/unet_tao/pgie_unet_tao_config.txt
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:1484 Deserialize engine failed because file path: /opt/nvidia/deepstream/deepstream-6.0/deepstream_tao_apps/configs/unet_tao/../../models/unet/unet_resnet18.etlt_b1_gpu0_fp16.engine open error
0:00:00.791441536 6213 0x55e694508130 WARN nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1889> [UID = 1]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-6.0/deepstream_tao_apps/configs/unet_tao/../../models/unet/unet_resnet18.etlt_b1_gpu0_fp16.engine failed
0:00:00.791494548 6213 0x55e694508130 WARN nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1996> [UID = 1]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-6.0/deepstream_tao_apps/configs/unet_tao/../../models/unet/unet_resnet18.etlt_b1_gpu0_fp16.engine failed, try rebuild
0:00:00.791502885 6213 0x55e694508130 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1914> [UID = 1]: Trying to create engine from model files
WARNING: [TRT]: onnx2trt_utils.cpp:362: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
WARNING: [TRT]: Detected invalid timing cache, setup a local cache instead
When I try to run peopleSegNet, process gets automatically killed after some time:
/opt/nvidia/deepstream/deepstream-6.0/deepstream_tao_apps# ./apps/tao_segmentation/ds-tao-segmentation -c configs/peopleSegNet_tao/pgie_peopleSegNet_tao_config.txt -i /opt/nvidia/deepstream/deepstream-6.0/samples/streams/sample_720p.h264
Now playing: configs/peopleSegNet_tao/pgie_peopleSegNet_tao_config.txt
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:1484 Deserialize engine failed because file path: /opt/nvidia/deepstream/deepstream-6.0/deepstream_tao_apps/configs/peopleSegNet_tao/../../models/peopleSegNet/peopleSegNet_resnet50.etlt_b1_gpu0_fp16.engine open error
0:00:00.577486621 139506 0x559dc6c8b330 WARN nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1889> [UID = 1]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-6.0/deepstream_tao_apps/configs/peopleSegNet_tao/../../models/peopleSegNet/peopleSegNet_resnet50.etlt_b1_gpu0_fp16.engine failed
0:00:00.577531749 139506 0x559dc6c8b330 WARN nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1996> [UID = 1]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-6.0/deepstream_tao_apps/configs/peopleSegNet_tao/../../models/peopleSegNet/peopleSegNet_resnet50.etlt_b1_gpu0_fp16.engine failed, try rebuild
0:00:00.577538590 139506 0x559dc6c8b330 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1914> [UID = 1]: Trying to create engine from model files
WARNING: [TRT]: Detected invalid timing cache, setup a local cache instead
Killed
I commented out the line below in the config file:
#model-engine-file=../../models/peopleSegNet/peopleSegNet_resnet50.etlt_b1_gpu0_fp16.engine
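For reference, the relevant section of the config might look like this after the change (an illustrative fragment; the `tlt-encoded-model` path and `network-mode` value here are assumptions based on typical nvinfer config keys, and the shipped config may differ):

```ini
[property]
# Commented out so nvinfer rebuilds the engine from the .etlt model
# instead of failing to deserialize a missing .engine file:
#model-engine-file=../../models/peopleSegNet/peopleSegNet_resnet50.etlt_b1_gpu0_fp16.engine
tlt-encoded-model=../../models/peopleSegNet/peopleSegNet_resnet50.etlt
network-mode=2  # 0=FP32, 1=INT8, 2=FP16
```

With `model-engine-file` commented out, nvinfer always rebuilds the TensorRT engine on startup, which is slow but avoids the "open error" when the serialized engine does not exist yet.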
/opt/nvidia/deepstream/deepstream-6.0/deepstream_tao_apps# ./apps/tao_segmentation/ds-tao-segmentation -c configs/peopleSegNet_tao/pgie_peopleSegNet_tao_config.txt -i /opt/nvidia/deepstream/deepstream-6.0/samples/streams/sample_720p.h264
Now playing: configs/peopleSegNet_tao/pgie_peopleSegNet_tao_config.txt
0:00:00.362525168 141079 0x55ba8746d730 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1914> [UID = 1]: Trying to create engine from model files
WARNING: [TRT]: Detected invalid timing cache, setup a local cache instead
Killed
In both cases, the process gets killed automatically after some time.
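To confirm it is the kernel's OOM killer terminating the process, the kernel log can be checked (a quick sketch, assuming a Linux host where dmesg is readable):

```shell
# Show recent OOM-killer activity in the kernel log; no output means the
# kernel recorded no out-of-memory kill (or dmesg access is restricted).
dmesg 2>/dev/null | grep -iE "out of memory|oom-killer|killed process" | tail -n 5 || true
```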
If you did not kill the process yourself and the system reached extreme resource starvation, the OS will have killed it. You can monitor system usage such as CPU and memory.
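One way to watch the app's resident memory over time is a small loop over /proc (a sketch assuming a Linux host; matching the process by name with pgrep is an assumption about how the app appears in the process list):

```shell
# Print a process's resident memory (VmRSS) every few seconds until it exits.
watch_rss() {
    local pid=$1 interval=${2:-5}
    while kill -0 "$pid" 2>/dev/null; do
        grep VmRSS "/proc/$pid/status"
        sleep "$interval"
    done
}

# Example: watch_rss "$(pgrep -f ds-tao-segmentation | head -n1)" 5
```

If VmRSS keeps climbing steadily until the process dies, that is consistent with the OOM killer ending it.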
On start it uses >1700 MB of GPU memory and >1 GB of RAM. After some time, RAM usage starts increasing until it consumes all the RAM, and the process gets killed automatically.
It's fine to run DeepStream with 4G of GPU memory, but your app drove GPU memory usage up to 3.5G. Did you have any other process that needs the GPU? I used the UNet model, and total GPU memory usage was around 1.7G.
GPU memory is not exhausted; only 1700 MB of GPU memory is being used. Only RAM is overused. Also, UNet is working fine; I'm having the issue with Mask RCNN (PeopleSegNet). And no other process is running when PeopleSegNet is launched with the provided apps/tao_segmentation/ds-tao-segmentation binary.
20539 root 20 0 9.962g 2.539g 589552 S 14.6 2.0 0:04.27 ds-tao-segmenta
The process used around 10G of virtual memory in total; about 2.5G of that is resident in physical memory, and the remaining 7.5G would be in the swap area. Does your OS partitioning include a swap partition?
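Whether swap is configured can be checked directly (a sketch assuming util-linux's `swapon` and procps' `free` are available, as on most Linux distributions):

```shell
# List active swap devices/files; empty output means no swap is configured.
swapon --show

# The "Swap:" row shows total/used/free swap alongside RAM.
free -h
```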