However, after the pipeline is created I just see a log line saying "Killed". I have already performed the steps in the DeepStream README: running make install in sources/libs/nvdsinfer_customparser and executing ./prepare_ds_trtis_model_repo.sh.
Here is the output:
jason@xavier:/opt/nvidia/deepstream/deepstream-5.0/sources/python/apps/deepstream-ssd-parser$ LD_PRELOAD=/usr/lib/aarch64-linux-gnu/libgomp.so.1 python3 deepstream_ssd_parser.py /opt/nvidia/deepstream/deepstream-5.0/samples/streams/sample_720p.h264
2020-05-28 11:16:14.789651: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.2
Creating Pipeline
Creating Source
Creating H264Parser
Creating Decoder
Creating NvStreamMux
Creating Nvinferserver
2020-05-28 11:16:17.531846: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.2
Creating Nvvidconv
Creating OSD (nvosd)
Creating Queue
Creating Converter 2 (nvvidconv2)
Creating capsfilter
Creating Encoder
Creating Code Parser
Creating Container
Creating Sink
Playing file /opt/nvidia/deepstream/deepstream-5.0/samples/streams/sample_720p.h264
Adding elements to Pipeline
Linking elements in the Pipeline
Starting pipeline
Opening in BLOCKING MODE
I0528 03:16:18.142904 26365 server.cc:120] Initializing Triton Inference Server
I0528 03:16:18.162633 26365 server_status.cc:55] New status tracking for model 'ssd_inception_v2_coco_2018_01_28'
I0528 03:16:18.162842 26365 model_repository_manager.cc:680] loading: ssd_inception_v2_coco_2018_01_28:1
I0528 03:16:18.163660 26365 base_backend.cc:176] Creating instance ssd_inception_v2_coco_2018_01_28_0_0_gpu0 on GPU 0 (7.2) using model.graphdef
2020-05-28 11:16:18.252795: W tensorflow/core/platform/profile_utils/cpu_utils.cc:98] Failed to find bogomips in /proc/cpuinfo; cannot determine CPU frequency
2020-05-28 11:16:18.254292: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x265cf560 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2020-05-28 11:16:18.254425: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version
2020-05-28 11:16:18.254739: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcuda.so.1
2020-05-28 11:16:18.255038: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:950] ARM64 does not support NUMA - returning NUMA node zero
2020-05-28 11:16:18.255244: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1639] Found device 0 with properties:
name: Xavier major: 7 minor: 2 memoryClockRate(GHz): 1.109
pciBusID: 0000:00:00.0
2020-05-28 11:16:18.255413: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.2
2020-05-28 11:16:18.255578: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcublas.so.10
2020-05-28 11:16:18.338158: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcufft.so.10
2020-05-28 11:16:18.455470: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcurand.so.10
2020-05-28 11:16:18.600896: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusolver.so.10
2020-05-28 11:16:18.682709: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusparse.so.10
2020-05-28 11:16:18.682996: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudnn.so.8
2020-05-28 11:16:18.683404: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:950] ARM64 does not support NUMA - returning NUMA node zero
2020-05-28 11:16:18.684015: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:950] ARM64 does not support NUMA - returning NUMA node zero
2020-05-28 11:16:18.684417: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1767] Adding visible gpu devices: 0
2020-05-28 11:16:28.657287: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1180] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-05-28 11:16:28.657420: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1186] 0
2020-05-28 11:16:28.657467: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1199] 0: N
2020-05-28 11:16:28.657798: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:950] ARM64 does not support NUMA - returning NUMA node zero
2020-05-28 11:16:28.658142: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:950] ARM64 does not support NUMA - returning NUMA node zero
2020-05-28 11:16:28.658388: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:950] ARM64 does not support NUMA - returning NUMA node zero
2020-05-28 11:16:28.658671: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1325] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 4658 MB memory) -> physical GPU (device: 0, name: Xavier, pci bus id: 0000:00:00.0, compute capability: 7.2)
2020-05-28 11:16:28.666698: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x7ec85d2390 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:
2020-05-28 11:16:28.666839: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Xavier, Compute Capability 7.2
I0528 03:16:30.540033 26365 model_repository_manager.cc:837] successfully loaded 'ssd_inception_v2_coco_2018_01_28' version 1
INFO: TrtISBackend id:5 initialized model: ssd_inception_v2_coco_2018_01_28
Killed
I am running this from the GUI, with Chrome and all other apps closed except for a couple of terminal windows.
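For what it's worth, a bare "Killed" with no Python traceback usually means the kernel's OOM killer terminated the process. A quick way to confirm (a sketch, assuming a Linux shell on the Xavier and permission to read the kernel log):

```shell
# Look for OOM-killer entries naming the python3 process:
dmesg | grep -iE 'out of memory|killed process' | tail -n 5

# Watch memory headroom while the pipeline starts up:
free -m
```

If dmesg shows an "Out of memory: Killed process ... (python3)" line, the fix is to free RAM or reduce the model's memory footprint rather than anything in the pipeline code.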
Hi @jasonpgf2a,
changing tf_gpu_memory_fraction from 0.6 to 0.5 in /opt/nvidia/deepstream/deepstream/sources/python/apps/deepstream-ssd-parser/dstest_ssd_nopostprocess.txt may resolve this issue; please give it a try.
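Concretely, the setting lives in the model_repo block of that protobuf-text config. A sketch of the relevant fragment (the surrounding field layout follows the DS 5.0 sample config and may differ slightly on your install):

```
infer_config {
  backend {
    trt_is {
      model_name: "ssd_inception_v2_coco_2018_01_28"
      model_repo {
        root: "../../../../samples/trtis_model_repo"
        tf_gpu_memory_fraction: 0.5   # lowered from 0.6 to leave more GPU memory headroom
      }
    }
  }
}
```

Lowering this fraction caps how much GPU memory the TensorFlow backend grabs at startup, which matters on a Jetson where CPU and GPU share the same physical RAM.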
Hi @mchi ,
after performing the steps in the DeepStream README (make install in sources/libs/nvdsinfer_customparser and ./prepare_ds_trtis_model_repo.sh), I ran 'python3 deepstream_ssd_parser.py /opt/nvidia/deepstream/deepstream-5.0/samples/streams/sample_720p.h264'
and got this ERROR output:
Creating Pipeline
Creating Source
Creating H264Parser
Creating Decoder
Creating NvStreamMux
Creating Nvinferserver
2020-08-06 17:00:29.060179: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.2
Creating Nvvidconv
Creating OSD (nvosd)
Creating Queue
Creating Converter 2 (nvvidconv2)
Creating capsfilter
Creating Encoder
Creating Code Parser
Creating Container
Creating Sink
Playing file /opt/nvidia/deepstream/deepstream-5.0/samples/streams/sample_720p.h264
Adding elements to Pipeline
Linking elements in the Pipeline
Starting pipeline
Opening in BLOCKING MODE
I0806 09:00:29.497959 10965 server.cc:120] Initializing Triton Inference Server
I0806 09:00:29.511454 10965 server_status.cc:55] New status tracking for model 'ssd_inception_v2_coco_2018_01_28'
I0806 09:00:29.511642 10965 model_repository_manager.cc:680] loading: ssd_inception_v2_coco_2018_01_28:1
I0806 09:00:29.512333 10965 base_backend.cc:176] Creating instance ssd_inception_v2_coco_2018_01_28_0_gpu0 on GPU 0 (7.2) using model.graphdef
2020-08-06 17:00:29.596644: W tensorflow/core/platform/profile_utils/cpu_utils.cc:98] Failed to find bogomips in /proc/cpuinfo; cannot determine CPU frequency
2020-08-06 17:00:29.598024: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0xde1cc00 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2020-08-06 17:00:29.598129: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version
2020-08-06 17:00:29.598398: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcuda.so.1
2020-08-06 17:00:29.598721: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:950] ARM64 does not support NUMA - returning NUMA node zero
2020-08-06 17:00:29.598926: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1639] Found device 0 with properties:
name: Xavier major: 7 minor: 2 memoryClockRate(GHz): 1.109
pciBusID: 0000:00:00.0
2020-08-06 17:00:29.599004: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.2
2020-08-06 17:00:29.599216: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcublas.so.10
2020-08-06 17:00:29.603280: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcufft.so.10
2020-08-06 17:00:29.604506: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcurand.so.10
2020-08-06 17:00:29.609778: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusolver.so.10
2020-08-06 17:00:29.613914: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusparse.so.10
2020-08-06 17:00:29.614128: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudnn.so.8
2020-08-06 17:00:29.614330: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:950] ARM64 does not support NUMA - returning NUMA node zero
2020-08-06 17:00:29.614587: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:950] ARM64 does not support NUMA - returning NUMA node zero
2020-08-06 17:00:29.614675: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1767] Adding visible gpu devices: 0
2020-08-06 17:00:32.915790: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1180] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-08-06 17:00:32.915956: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1186] 0
2020-08-06 17:00:32.916000: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1199] 0: N
2020-08-06 17:00:32.916318: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:950] ARM64 does not support NUMA - returning NUMA node zero
2020-08-06 17:00:32.916598: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:950] ARM64 does not support NUMA - returning NUMA node zero
2020-08-06 17:00:32.916774: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:950] ARM64 does not support NUMA - returning NUMA node zero
2020-08-06 17:00:32.916953: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1325] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 3881 MB memory) -> physical GPU (device: 0, name: Xavier, pci bus id: 0000:00:00.0, compute capability: 7.2)
2020-08-06 17:00:32.923383: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x7eb04270a0 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:
2020-08-06 17:00:32.923555: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Xavier, Compute Capability 7.2
I0806 09:00:33.672094 10965 model_repository_manager.cc:837] successfully loaded 'ssd_inception_v2_coco_2018_01_28' version 1
ERROR: gie id: 5 model: ssd_inception_v2_coco_2018_01_28 input or output layers are empty, check trtis model config settings
ERROR: failed to initialize backend while ensuring model:ssd_inception_v2_coco_2018_01_28 ready, nvinfer error:NVDSINFER_TRTIS_ERROR
0:00:05.296779474 10965 0xf1dda10 ERROR nvinferserver gstnvinferserver.cpp:362:gst_nvinfer_server_logger: nvinferserver[UID 5]: Error in createNNBackend() <infer_trtis_context.cpp:199> [UID = 5]: failed to initialize trtis backend for model:ssd_inception_v2_coco_2018_01_28, nvinfer error:NVDSINFER_TRTIS_ERROR
I0806 09:00:33.675346 10965 model_repository_manager.cc:708] unloading: ssd_inception_v2_coco_2018_01_28:1
I0806 09:00:33.745982 10965 model_repository_manager.cc:816] successfully unloaded 'ssd_inception_v2_coco_2018_01_28' version 1
I0806 09:00:33.746183 10965 server.cc:179] Waiting for in-flight inferences to complete.
I0806 09:00:33.746230 10965 server.cc:194] Timeout 30: Found 0 live models and 0 in-flight requests
0:00:05.368937514 10965 0xf1dda10 ERROR nvinferserver gstnvinferserver.cpp:362:gst_nvinfer_server_logger: nvinferserver[UID 5]: Error in initialize() <infer_base_context.cpp:78> [UID = 5]: create nn-backend failed, check config file settings, nvinfer error:NVDSINFER_TRTIS_ERROR
0:00:05.369028234 10965 0xf1dda10 WARN nvinferserver gstnvinferserver_impl.cpp:439:start: error: Failed to initialize InferTrtIsContext
0:00:05.369100811 10965 0xf1dda10 WARN nvinferserver gstnvinferserver_impl.cpp:439:start: error: Config file path: dstest_ssd_nopostprocess.txt
0:00:05.369810509 10965 0xf1dda10 WARN nvinferserver gstnvinferserver.cpp:460:gst_nvinfer_server_start: error: gstnvinferserver_impl start failed
Error: gst-resource-error-quark: Failed to initialize InferTrtIsContext (1): /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinferserver/gstnvinferserver_impl.cpp(439): start (): /GstPipeline:pipeline0/GstNvInferServer:primary-inference: Config file path: dstest_ssd_nopostprocess.txt
Why does this error occur? Is the model's config.pbtxt missing?
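For reference, the "input or output layers are empty" message refers to the tensor declarations Triton reads from the model's config.pbtxt in the model repository. A hedged sketch of what that file typically looks like for this model (the tensor names and dims below are the standard TF Object Detection API SSD export conventions, an assumption about your repo rather than something confirmed in this thread):

```
name: "ssd_inception_v2_coco_2018_01_28"
platform: "tensorflow_graphdef"
max_batch_size: 1
input [
  {
    name: "image_tensor"          # assumed: stock TF OD-API input tensor name
    data_type: TYPE_UINT8
    format: FORMAT_NHWC
    dims: [ 300, 300, 3 ]
  }
]
output [
  {
    name: "detection_boxes"
    data_type: TYPE_FP32
    dims: [ 100, 4 ]
  },
  {
    name: "detection_scores"
    data_type: TYPE_FP32
    dims: [ 100 ]
  },
  {
    name: "detection_classes"
    data_type: TYPE_FP32
    dims: [ 100 ]
  },
  {
    name: "num_detections"
    data_type: TYPE_FP32
    dims: [ 1 ]
  }
]
```

If this file is missing or its input/output sections are empty, Triton loads the graphdef but nvinferserver cannot discover any layers, which matches the error above; re-running ./prepare_ds_trtis_model_repo.sh should regenerate the sample repository.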