Error while running reference app inside Docker (DeepStream 5)

I am trying to run the DeepStream reference app using Docker, but I am getting the following error:

root@7457e91ade87:/opt/nvidia/deepstream/deepstream-5.0/samples# sudo vim configs/deepstream-app-trtis/source30_1080p_dec_infer-resnet_tiled_display_int8.txt
root@7457e91ade87:/opt/nvidia/deepstream/deepstream-5.0/samples# deepstream-app -c configs/deepstream-app-trtis/source30_1080p_dec_infer-resnet_tiled_display_int8.txt
2020-05-30 13:06:47.146650: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.2
WARNING: infer_proto_utils.cpp:118 auto-update preprocess.network_format to IMAGE_FORMAT_RGB
I0530 13:06:49.169504 1595 metrics.cc:162] found 1 GPUs supporting NVML metrics
I0530 13:06:49.174933 1595 metrics.cc:171] GPU 0: Tesla M60
I0530 13:06:49.175109 1595 server.cc:112] Initializing TensorRT Inference Server
E0530 13:06:49.292971 1595 model_repository_manager.cc:1505] instance group Primary_Detector_0 of model Primary_Detector specifies invalid or unsupported gpu id of 0. The minimum required CUDA compute compatibility is 6.000000
ERROR: infer_trtis_server.cpp:526 TRTIS: failed to load model Primary_Detector, trtis_err_str:INTERNAL, err_msg:failed to load 'Primary_Detector', no version is available
ERROR: infer_trtis_backend.cpp:42 failed to load model: Primary_Detector, nvinfer error:NVDSINFER_TRTIS_ERROR
ERROR: infer_trtis_backend.cpp:172 failed to initialize backend while ensuring model:Primary_Detector ready, nvinfer error:NVDSINFER_TRTIS_ERROR
0:00:02.668929286 1595 0x558c56ca3830 ERROR nvinferserver gstnvinferserver.cpp:362:gst_nvinfer_server_logger:<primary_gie> nvinferserver[UID 1]: Error in createNNBackend() <infer_trtis_context.cpp:199> [UID = 1]: failed to initialize trtis backend for model:Primary_Detector, nvinfer error:NVDSINFER_TRTIS_ERROR
I0530 13:06:49.293262 1595 server.cc:180] Waiting for in-flight inferences to complete.
I0530 13:06:49.293278 1595 server.cc:195] Timeout 30: Found 0 live models and 0 in-flight requests
0:00:02.669057584 1595 0x558c56ca3830 ERROR nvinferserver gstnvinferserver.cpp:362:gst_nvinfer_server_logger:<primary_gie> nvinferserver[UID 1]: Error in initialize() <infer_base_context.cpp:78> [UID = 1]: create nn-backend failed, check config file settings, nvinfer error:NVDSINFER_TRTIS_ERROR
0:00:02.669077084 1595 0x558c56ca3830 WARN nvinferserver gstnvinferserver_impl.cpp:439:start:<primary_gie> error: Failed to initialize InferTrtIsContext
0:00:02.669085984 1595 0x558c56ca3830 WARN nvinferserver gstnvinferserver_impl.cpp:439:start:<primary_gie> error: Config file path: /opt/nvidia/deepstream/deepstream-5.0/samples/configs/deepstream-app-trtis/config_infer_plan_engine_primary.txt
0:00:02.669176883 1595 0x558c56ca3830 WARN nvinferserver gstnvinferserver.cpp:460:gst_nvinfer_server_start:<primary_gie> error: gstnvinferserver_impl start failed
** ERROR: main:651: Failed to set pipeline to PAUSED
Quitting
ERROR from primary_gie: Failed to initialize InferTrtIsContext
Debug info: gstnvinferserver_impl.cpp(439): start (): /GstPipeline:pipeline/GstBin:primary_gie_bin/GstNvInferServer:primary_gie:
Config file path: /opt/nvidia/deepstream/deepstream-5.0/samples/configs/deepstream-app-trtis/config_infer_plan_engine_primary.txt
ERROR from primary_gie: gstnvinferserver_impl start failed
Debug info: gstnvinferserver.cpp(460): gst_nvinfer_server_start (): /GstPipeline:pipeline/GstBin:primary_gie_bin/GstNvInferServer:primary_gie
App run failed
W0530 13:06:51.181173 1595 metrics.cc:274] failed to get energy consumption for GPU 0, NVML_ERROR 3

I pulled the Docker container from nvcr.io/nvidia/deepstream:5.0-dp-20.04-triton.

• Hardware Platform (Jetson / GPU) - Tesla M60
• DeepStream Version - 5.0.0
• TensorRT Version - 7.0.0.11
• NVIDIA GPU Driver Version - 440.33.01

Hi,
The M60's compute capability is 5.2, which is below the 6.0 minimum that Triton Inference Server reports in the error message ("The minimum required CUDA compute compatibility is 6.000000"), so the model cannot be loaded on this GPU.
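To make the check above concrete, here is a minimal sketch of the comparison Triton is effectively making. The helper name and the capability strings are illustrative (a real setup would read the capability from `nvidia-smi` or `cudaGetDeviceProperties`); the 6.0 threshold is the one reported in the error log, and 5.2 is the Tesla M60's capability.

```python
# Minimum compute capability Triton requires, per the error message above.
TRITON_MIN_CC = (6, 0)

def meets_triton_minimum(cc: str) -> bool:
    """Return True if a 'major.minor' compute-capability string
    meets Triton's minimum. Hypothetical helper for illustration."""
    major, minor = (int(part) for part in cc.split("."))
    # Tuple comparison handles major/minor ordering correctly
    # (e.g. 6.1 > 6.0 > 5.2).
    return (major, minor) >= TRITON_MIN_CC

print(meets_triton_minimum("5.2"))  # Tesla M60 -> False
print(meets_triton_minimum("7.0"))  # e.g. Tesla V100 -> True
```

In other words, the failure is expected on an M60: the container and config are fine, but the GPU itself does not satisfy Triton's minimum compute capability.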