Imagenet running but no display?

I have compiled the Hello AI World / imagenet app on my Nano (in a Docker container running JetPack 4.3) and I am running into an unusual problem. The imagenet app seems to start just fine, but I don't see anything on the display; it appears to just be running in the background. Any ideas what I am missing? It is using a CSI camera as a source, which I have confirmed works in GStreamer on the Nano when displaying directly to the screen. Below are the log outputs from imagenet (I've also included a rough Python sketch of what I'm running after the log):

[gstreamer] initialized gstreamer, version 1.14.5.0
[gstreamer] gstCamera -- attempting to create device csi://0
[gstreamer] gstCamera pipeline string:
[gstreamer] nvarguscamerasrc sensor-id=0 ! video/x-raw(memory:NVMM), width=(int)1280, height=(int)720, framerate=30/1, format=(string)NV12 ! nvvidconv flip-method=2 ! video/x-raw ! appsink name=mysink
[gstreamer] gstCamera successfully created device csi://0
[video]  created gstCamera from csi://0
------------------------------------------------
gstCamera video options:
------------------------------------------------
  -- URI: csi://0
     - protocol:  csi
     - location:  0
  -- deviceType: csi
  -- ioType:     input
  -- codec:      raw
  -- width:      1280
  -- height:     720
  -- frameRate:  30.000000
  -- bitRate:    0
  -- numBuffers: 4
  -- zeroCopy:   true
  -- flipMethod: rotate-180
  -- loop:       0
------------------------------------------------
[OpenGL] glDisplay -- X screen 0 resolution:  1280x800
[OpenGL] glDisplay -- X window resolution:    1280x800
[OpenGL] glDisplay -- display device initialized (1280x800)
[video]  created glDisplay from display://0
------------------------------------------------
glDisplay video options:
------------------------------------------------
  -- URI: display://0
     - protocol:  display
     - location:  0
  -- deviceType: display
  -- ioType:     output
  -- codec:      raw
  -- width:      1280
  -- height:     800
  -- frameRate:  0.000000
  -- bitRate:    0
  -- numBuffers: 4
  -- zeroCopy:   true
  -- flipMethod: none
  -- loop:       0
------------------------------------------------

imageNet -- loading classification network model from:
         -- prototxt     networks/googlenet.prototxt
         -- model        networks/bvlc_googlenet.caffemodel
         -- class_labels networks/ilsvrc12_synset_words.txt
         -- input_blob   'data'
         -- output_blob  'prob'
         -- batch_size   1

[TRT]    TensorRT version 6.0.1
[TRT]    loading NVIDIA plugins...
[TRT]    Plugin Creator registration succeeded - GridAnchor_TRT
[TRT]    Plugin Creator registration succeeded - GridAnchorRect_TRT
[TRT]    Plugin Creator registration succeeded - NMS_TRT
[TRT]    Plugin Creator registration succeeded - Reorg_TRT
[TRT]    Plugin Creator registration succeeded - Region_TRT
[TRT]    Plugin Creator registration succeeded - Clip_TRT
[TRT]    Plugin Creator registration succeeded - LReLU_TRT
[TRT]    Plugin Creator registration succeeded - PriorBox_TRT
[TRT]    Plugin Creator registration succeeded - Normalize_TRT
[TRT]    Plugin Creator registration succeeded - RPROI_TRT
[TRT]    Plugin Creator registration succeeded - BatchedNMS_TRT
[TRT]    Could not register plugin creator:  FlattenConcat_TRT in namespace:
[TRT]    detected model format - caffe  (extension '.caffemodel')
[TRT]    desired precision specified for GPU: FASTEST
[TRT]    requested fasted precision for device GPU without providing valid calibrator, disabling INT8
[TRT]    native precisions detected for GPU:  FP32, FP16
[TRT]    selecting fastest native precision for GPU:  FP16
[TRT]    attempting to open engine cache file /usr/local/bin/networks/bvlc_googlenet.caffemodel.1.1.6001.GPU.FP16.engine
[TRT]    cache file not found, profiling network model on device GPU
[TRT]    device GPU, loading /usr/local/bin/networks/googlenet.prototxt /usr/local/bin/networks/bvlc_googlenet.caffemodel
[TRT]    device GPU, configuring network builder
[TRT]    device GPU, building FP16:  ON
[TRT]    device GPU, building INT8:  OFF
[TRT]    device GPU, building CUDA engine (this may take a few minutes the first time a network is loaded)
[TRT]    Applying generic optimizations to the graph for inference.
[TRT]    Original: 141 layers
[TRT]    After dead-layer removal: 141 layers
[TRT]    After scale fusion: 141 layers
[TRT]    Fusing conv1/7x7_s2 with conv1/relu_7x7
[TRT]    Fusing conv2/3x3_reduce with conv2/relu_3x3_reduce
[TRT]    Fusing conv2/3x3 with conv2/relu_3x3
[TRT]    Fusing inception_3a/1x1 with inception_3a/relu_1x1
[TRT]    Fusing inception_3a/3x3_reduce with inception_3a/relu_3x3_reduce
[TRT]    Fusing inception_3a/3x3 with inception_3a/relu_3x3
[TRT]    Fusing inception_3a/5x5_reduce with inception_3a/relu_5x5_reduce
[TRT]    Fusing inception_3a/5x5 with inception_3a/relu_5x5
[TRT]    Fusing inception_3a/pool_proj with inception_3a/relu_pool_proj
[TRT]    Fusing inception_3b/1x1 with inception_3b/relu_1x1
[TRT]    Fusing inception_3b/3x3_reduce with inception_3b/relu_3x3_reduce
[TRT]    Fusing inception_3b/3x3 with inception_3b/relu_3x3
[TRT]    Fusing inception_3b/5x5_reduce with inception_3b/relu_5x5_reduce
[TRT]    Fusing inception_3b/5x5 with inception_3b/relu_5x5
[TRT]    Fusing inception_3b/pool_proj with inception_3b/relu_pool_proj
[TRT]    Fusing inception_4a/1x1 with inception_4a/relu_1x1
[TRT]    Fusing inception_4a/3x3_reduce with inception_4a/relu_3x3_reduce
[TRT]    Fusing inception_4a/3x3 with inception_4a/relu_3x3
[TRT]    Fusing inception_4a/5x5_reduce with inception_4a/relu_5x5_reduce
[TRT]    Fusing inception_4a/5x5 with inception_4a/relu_5x5
[TRT]    Fusing inception_4a/pool_proj with inception_4a/relu_pool_proj
[TRT]    Fusing inception_4b/1x1 with inception_4b/relu_1x1
[TRT]    Fusing inception_4b/3x3_reduce with inception_4b/relu_3x3_reduce
[TRT]    Fusing inception_4b/3x3 with inception_4b/relu_3x3
[TRT]    Fusing inception_4b/5x5_reduce with inception_4b/relu_5x5_reduce
[TRT]    Fusing inception_4b/5x5 with inception_4b/relu_5x5
[TRT]    Fusing inception_4b/pool_proj with inception_4b/relu_pool_proj
[TRT]    Fusing inception_4c/1x1 with inception_4c/relu_1x1
[TRT]    Fusing inception_4c/3x3_reduce with inception_4c/relu_3x3_reduce
[TRT]    Fusing inception_4c/3x3 with inception_4c/relu_3x3
[TRT]    Fusing inception_4c/5x5_reduce with inception_4c/relu_5x5_reduce
[TRT]    Fusing inception_4c/5x5 with inception_4c/relu_5x5
[TRT]    Fusing inception_4c/pool_proj with inception_4c/relu_pool_proj
[TRT]    Fusing inception_4d/1x1 with inception_4d/relu_1x1
[TRT]    Fusing inception_4d/3x3_reduce with inception_4d/relu_3x3_reduce
[TRT]    Fusing inception_4d/3x3 with inception_4d/relu_3x3
[TRT]    Fusing inception_4d/5x5_reduce with inception_4d/relu_5x5_reduce
[TRT]    Fusing inception_4d/5x5 with inception_4d/relu_5x5
[TRT]    Fusing inception_4d/pool_proj with inception_4d/relu_pool_proj
[TRT]    Fusing inception_4e/1x1 with inception_4e/relu_1x1
[TRT]    Fusing inception_4e/3x3_reduce with inception_4e/relu_3x3_reduce
[TRT]    Fusing inception_4e/3x3 with inception_4e/relu_3x3
[TRT]    Fusing inception_4e/5x5_reduce with inception_4e/relu_5x5_reduce
[TRT]    Fusing inception_4e/5x5 with inception_4e/relu_5x5
[TRT]    Fusing inception_4e/pool_proj with inception_4e/relu_pool_proj
[TRT]    Fusing inception_5a/1x1 with inception_5a/relu_1x1
[TRT]    Fusing inception_5a/3x3_reduce with inception_5a/relu_3x3_reduce
[TRT]    Fusing inception_5a/3x3 with inception_5a/relu_3x3
[TRT]    Fusing inception_5a/5x5_reduce with inception_5a/relu_5x5_reduce
[TRT]    Fusing inception_5a/5x5 with inception_5a/relu_5x5
[TRT]    Fusing inception_5a/pool_proj with inception_5a/relu_pool_proj
[TRT]    Fusing inception_5b/1x1 with inception_5b/relu_1x1
[TRT]    Fusing inception_5b/3x3_reduce with inception_5b/relu_3x3_reduce
[TRT]    Fusing inception_5b/3x3 with inception_5b/relu_3x3
[TRT]    Fusing inception_5b/5x5_reduce with inception_5b/relu_5x5_reduce
[TRT]    Fusing inception_5b/5x5 with inception_5b/relu_5x5
[TRT]    Fusing inception_5b/pool_proj with inception_5b/relu_pool_proj
[TRT]    After vertical fusions: 84 layers
[TRT]    After final dead-layer removal: 84 layers
[TRT]    Merging layers: inception_3a/1x1 + inception_3a/relu_1x1 || inception_3a/3x3_reduce + inception_3a/relu_3x3_reduce || inception_3a/5x5_reduce + inception_3a/relu_5x5_reduce
[TRT]    Merging layers: inception_3b/1x1 + inception_3b/relu_1x1 || inception_3b/3x3_reduce + inception_3b/relu_3x3_reduce || inception_3b/5x5_reduce + inception_3b/relu_5x5_reduce
[TRT]    Merging layers: inception_4a/1x1 + inception_4a/relu_1x1 || inception_4a/3x3_reduce + inception_4a/relu_3x3_reduce || inception_4a/5x5_reduce + inception_4a/relu_5x5_reduce
[TRT]    Merging layers: inception_4b/1x1 + inception_4b/relu_1x1 || inception_4b/3x3_reduce + inception_4b/relu_3x3_reduce || inception_4b/5x5_reduce + inception_4b/relu_5x5_reduce
[TRT]    Merging layers: inception_4c/1x1 + inception_4c/relu_1x1 || inception_4c/3x3_reduce + inception_4c/relu_3x3_reduce || inception_4c/5x5_reduce + inception_4c/relu_5x5_reduce
[TRT]    Merging layers: inception_4d/1x1 + inception_4d/relu_1x1 || inception_4d/3x3_reduce + inception_4d/relu_3x3_reduce || inception_4d/5x5_reduce + inception_4d/relu_5x5_reduce
[TRT]    Merging layers: inception_4e/1x1 + inception_4e/relu_1x1 || inception_4e/3x3_reduce + inception_4e/relu_3x3_reduce || inception_4e/5x5_reduce + inception_4e/relu_5x5_reduce
[TRT]    Merging layers: inception_5a/1x1 + inception_5a/relu_1x1 || inception_5a/3x3_reduce + inception_5a/relu_3x3_reduce || inception_5a/5x5_reduce + inception_5a/relu_5x5_reduce
[TRT]    Merging layers: inception_5b/1x1 + inception_5b/relu_1x1 || inception_5b/3x3_reduce + inception_5b/relu_3x3_reduce || inception_5b/5x5_reduce + inception_5b/relu_5x5_reduce
[TRT]    After tensor merging: 66 layers
[TRT]    Eliminating concatenation inception_3a/output
[TRT]    Generating copy for inception_3a/1x1 + inception_3a/relu_1x1 || inception_3a/3x3_reduce + inception_3a/relu_3x3_reduce || inception_3a/5x5_reduce + inception_3a/relu_5x5_reduce to inception_3a/output
[TRT]    Retargeting inception_3a/3x3 to inception_3a/output
[TRT]    Retargeting inception_3a/5x5 to inception_3a/output
[TRT]    Retargeting inception_3a/pool_proj to inception_3a/output
[TRT]    Eliminating concatenation inception_3b/output
[TRT]    Generating copy for inception_3b/1x1 + inception_3b/relu_1x1 || inception_3b/3x3_reduce + inception_3b/relu_3x3_reduce || inception_3b/5x5_reduce + inception_3b/relu_5x5_reduce to inception_3b/output
[TRT]    Retargeting inception_3b/3x3 to inception_3b/output
[TRT]    Retargeting inception_3b/5x5 to inception_3b/output
[TRT]    Retargeting inception_3b/pool_proj to inception_3b/output
[TRT]    Eliminating concatenation inception_4a/output
[TRT]    Generating copy for inception_4a/1x1 + inception_4a/relu_1x1 || inception_4a/3x3_reduce + inception_4a/relu_3x3_reduce || inception_4a/5x5_reduce + inception_4a/relu_5x5_reduce to inception_4a/output
[TRT]    Retargeting inception_4a/3x3 to inception_4a/output
[TRT]    Retargeting inception_4a/5x5 to inception_4a/output
[TRT]    Retargeting inception_4a/pool_proj to inception_4a/output
[TRT]    Eliminating concatenation inception_4b/output
[TRT]    Generating copy for inception_4b/1x1 + inception_4b/relu_1x1 || inception_4b/3x3_reduce + inception_4b/relu_3x3_reduce || inception_4b/5x5_reduce + inception_4b/relu_5x5_reduce to inception_4b/output
[TRT]    Retargeting inception_4b/3x3 to inception_4b/output
[TRT]    Retargeting inception_4b/5x5 to inception_4b/output
[TRT]    Retargeting inception_4b/pool_proj to inception_4b/output
[TRT]    Eliminating concatenation inception_4c/output
[TRT]    Generating copy for inception_4c/1x1 + inception_4c/relu_1x1 || inception_4c/3x3_reduce + inception_4c/relu_3x3_reduce || inception_4c/5x5_reduce + inception_4c/relu_5x5_reduce to inception_4c/output
[TRT]    Retargeting inception_4c/3x3 to inception_4c/output
[TRT]    Retargeting inception_4c/5x5 to inception_4c/output
[TRT]    Retargeting inception_4c/pool_proj to inception_4c/output
[TRT]    Eliminating concatenation inception_4d/output
[TRT]    Generating copy for inception_4d/1x1 + inception_4d/relu_1x1 || inception_4d/3x3_reduce + inception_4d/relu_3x3_reduce || inception_4d/5x5_reduce + inception_4d/relu_5x5_reduce to inception_4d/output
[TRT]    Retargeting inception_4d/3x3 to inception_4d/output
[TRT]    Retargeting inception_4d/5x5 to inception_4d/output
[TRT]    Retargeting inception_4d/pool_proj to inception_4d/output
[TRT]    Eliminating concatenation inception_4e/output
[TRT]    Generating copy for inception_4e/1x1 + inception_4e/relu_1x1 || inception_4e/3x3_reduce + inception_4e/relu_3x3_reduce || inception_4e/5x5_reduce + inception_4e/relu_5x5_reduce to inception_4e/output
[TRT]    Retargeting inception_4e/3x3 to inception_4e/output
[TRT]    Retargeting inception_4e/5x5 to inception_4e/output
[TRT]    Retargeting inception_4e/pool_proj to inception_4e/output
[TRT]    Eliminating concatenation inception_5a/output
[TRT]    Generating copy for inception_5a/1x1 + inception_5a/relu_1x1 || inception_5a/3x3_reduce + inception_5a/relu_3x3_reduce || inception_5a/5x5_reduce + inception_5a/relu_5x5_reduce to inception_5a/output
[TRT]    Retargeting inception_5a/3x3 to inception_5a/output
[TRT]    Retargeting inception_5a/5x5 to inception_5a/output
[TRT]    Retargeting inception_5a/pool_proj to inception_5a/output
[TRT]    Eliminating concatenation inception_5b/output
[TRT]    Generating copy for inception_5b/1x1 + inception_5b/relu_1x1 || inception_5b/3x3_reduce + inception_5b/relu_3x3_reduce || inception_5b/5x5_reduce + inception_5b/relu_5x5_reduce to inception_5b/output
[TRT]    Retargeting inception_5b/3x3 to inception_5b/output
[TRT]    Retargeting inception_5b/5x5 to inception_5b/output
[TRT]    Retargeting inception_5b/pool_proj to inception_5b/output
[TRT]    After concat removal: 66 layers
[TRT]    Graph construction and optimization completed in 0.0414486 seconds.
[TRT]    Constructing optimization profile number 0 out of 1
--------------- Timing Runner: <reformat> (Reformat)
[TRT]    Tactic: 1002 time 1.07091
[TRT]    Tactic: 0 time 0.919375
[TRT]    Fastest Tactic: 0 Time: 0.919375
[TRT]    --------------- Timing Runner: <reformat> (Reformat)
[TRT]    Tactic: 1002 time 5.04237
[TRT]    Tactic: 0 time 0.332031
[TRT]    Fastest Tactic: 0 Time: 0.332031
[TRT]    *************** Autotuning format combination: Float(1,224,50176,150528) -> Float(1,112,12544,802816) ***************
[TRT]    --------------- Timing Runner: conv1/7x7_s2 + conv1/relu_7x7 (LegacySASSConvolution)
[TRT]    Tactic: 0 time 3.55414
[TRT]    Fastest Tactic: 0 Time: 3.55414
[TRT]    --------------- Timing Runner: conv1/7x7_s2 + conv1/relu_7x7 (FusedConvActConvolution)
[TRT]    Tactic: 1 time 6.32964
[TRT]    Tactic: 49 time 3.71888
[TRT]    Tactic: 128 time 3.72315
[TRT]    Fastest Tactic: 49 Time: 3.71888
[TRT]    --------------- Timing Runner: conv1/7x7_s2 + conv1/relu_7x7 (CaskConvolution)
[TRT]    conv1/7x7_s2 + conv1/relu_7x7 (scudnn) Set Tactic Name: maxwell_scudnn_128x32_relu_medium_nn_v1
[TRT]    Tactic: 1062367460111450758 time 2.52573
[TRT]    conv1/7x7_s2 + conv1/relu_7x7 (scudnn) Set Tactic Name: maxwell_scudnn_128x64_relu_large_nn_v1
[TRT]    Tactic: 4337000649858996379 time 2.03492
[TRT]    conv1/7x7_s2 + conv1/relu_7x7 (scudnn) Set Tactic Name: maxwell_scudnn_128x128_relu_medium_nn_v1
[TRT]    Tactic: 4501471010995462441 time 3.93099
[TRT]    conv1/7x7_s2 + conv1/relu_7x7 (scudnn) Set Tactic Name: maxwell_scudnn_128x64_relu_medium_nn_v1
[TRT]    Tactic: 6645123197870846056 time 2.00141
[TRT]    conv1/7x7_s2 + conv1/relu_7x7 (scudnn) Set Tactic Name: maxwell_scudnn_128x128_relu_large_nn_v1
[TRT]    Tactic: -9137461792520977713 time 3.96133
[TRT]    conv1/7x7_s2 + conv1/relu_7x7 (scudnn) Set Tactic Name: maxwell_scudnn_128x32_relu_large_nn_v1
[TRT]    Tactic: -6092040395344634144 time 2.59784
[TRT]    Fastest Tactic: 6645123197870846056 Time: 2.00141
[TRT]    --------------- Timing Runner: conv1/7x7_s2 + conv1/relu_7x7 (CudaConvolution)
[TRT]    Tactic: 0 time 4.6825
^Creceived SIGINT
[TRT]    Tactic: 1 time 2.61185
[TRT]    Tactic: 2 time 4.2749
[TRT]    Fastest Tactic: 1 Time: 2.61185
[TRT]    --------------- Timing Runner: conv1/7x7_s2 + conv1/relu_7x7 (CudaDepthwiseConvolution)
[TRT]    CudaDepthwiseConvolution has no valid tactics for this config, skipping
[TRT]    >>>>>>>>>>>>>>> Chose Runner Type: CaskConvolution Tactic: 6645123197870846056
[TRT]    conv1/7x7_s2 + conv1/relu_7x7 (scudnn) Set Tactic Name: maxwell_scudnn_128x64_relu_medium_nn_v1
[TRT]
[TRT]    *************** Autotuning format combination: Half(1,224,50176,150528) -> Half(1,112,12544,802816) ***************
[TRT]    --------------- Timing Runner: conv1/7x7_s2 + conv1/relu_7x7 (FusedConvActConvolution)
[TRT]    FusedConvActConvolution has no valid tactics for this config, skipping
[TRT]    --------------- Timing Runner: conv1/7x7_s2 + conv1/relu_7x7 (CaskConvolution)
[TRT]    CaskConvolution has no valid tactics for this config, skipping
[TRT]    --------------- Timing Runner: conv1/7x7_s2 + conv1/relu_7x7 (CudaConvolution)
[TRT]    Tactic: 0 time 4.32685
[TRT]    Tactic: 1 time 2.27575
[TRT]    Tactic: 2 time 3.47664
[TRT]    Fastest Tactic: 1 Time: 2.27575
[TRT]    --------------- Timing Runner: conv1/7x7_s2 + conv1/relu_7x7 (CudaDepthwiseConvolution)
[TRT]    CudaDepthwiseConvolution has no valid tactics for this config, skipping
[TRT]    >>>>>>>>>>>>>>> Chose Runner Type: CudaConvolution Tactic: 1
[TRT]
[TRT]    *************** Autotuning format combination: Half(1,224,50176:2,100352) -> Half(1,112,12544:2,401408) ***************
[TRT]    --------------- Timing Runner: conv1/7x7_s2 + conv1/relu_7x7 (LegacySASSConvolution)
[TRT]    Tactic: 0 time 1.10445
[TRT]    Fastest Tactic: 0 Time: 1.10445
[TRT]    --------------- Timing Runner: conv1/7x7_s2 + conv1/relu_7x7 (FusedConvActConvolution)
[TRT]    FusedConvActConvolution has no valid tactics for this config, skipping
[TRT]    --------------- Timing Runner: conv1/7x7_s2 + conv1/relu_7x7 (CaskConvolution)
[TRT]    conv1/7x7_s2 + conv1/relu_7x7 (hcudnn) Set Tactic Name: maxwell_fp16x2_hcudnn_fp16x2_128x32_relu_medium_nn_v1
[TRT]    Tactic: 3564772625446233998 time 1.44518
[TRT]    conv1/7x7_s2 + conv1/relu_7x7 (hcudnn) Set Tactic Name: maxwell_fp16x2_hcudnn_fp16x2_128x32_relu_large_nn_v1
[TRT]    Tactic: 3650389455493082349 time 1.48325
[TRT]    conv1/7x7_s2 + conv1/relu_7x7 (hcudnn) Set Tactic Name: maxwell_fp16x2_hcudnn_fp16x2_128x64_relu_medium_nn_v1
[TRT]    Tactic: 7205456024582378848 time 1.12039
[TRT]    conv1/7x7_s2 + conv1/relu_7x7 (hcudnn) Set Tactic Name: maxwell_fp16x2_hcudnn_fp16x2_128x64_relu_large_nn_v1
[TRT]    Tactic: -6490690591794140522 time 1.12956
[TRT]    conv1/7x7_s2 + conv1/relu_7x7 (hcudnn) Set Tactic Name: maxwell_fp16x2_hcudnn_fp16x2_128x128_relu_large_nn_v1
[TRT]    Tactic: -4686027666808657977 time 2.21453
[TRT]    conv1/7x7_s2 + conv1/relu_7x7 (hcudnn) Set Tactic Name: maxwell_fp16x2_hcudnn_fp16x2_128x128_relu_medium_nn_v1
[TRT]    Tactic: -3898373634979201110 time 2.1975
[TRT]    Fastest Tactic: 7205456024582378848 Time: 1.12039
[TRT]    --------------- Timing Runner: conv1/7x7_s2 + conv1/relu_7x7 (CudaConvolution)
[TRT]    CudaConvolution has no valid tactics for this config, skipping
[TRT]    --------------- Timing Runner: conv1/7x7_s2 + conv1/relu_7x7 (CudaDepthwiseConvolution)
[TRT]    CudaDepthwiseConvolution has no valid tactics for this config, skipping
[TRT]    >>>>>>>>>>>>>>> Chose Runner Type: LegacySASSConvolution Tactic: 0
[TRT]
[TRT]    --------------- Timing Runner: <reformat> (Reformat)
[TRT]    Tactic: 1002 time 0.561328
[TRT]    Tactic: 0 time 0.820338
[TRT]    Fastest Tactic: 1002 Time: 0.561328
[TRT]    --------------- Timing Runner: <reformat> (Reformat)
[TRT]    Tactic: 1002 time 1.69521
[TRT]    Tactic: 0 time 0.657552
[TRT]    Fastest Tactic: 0 Time: 0.657552
[TRT]    --------------- Timing Runner: <reformat> (Reformat)
[TRT]    Tactic: 1002 time 0.568229
[TRT]    Tactic: 0 time 0.696745
[TRT]    Fastest Tactic: 1002 Time: 0.568229
[TRT]    --------------- Timing Runner: <reformat> (Reformat)
[TRT]    Tactic: 1002 time 1.69677
[TRT]    Tactic: 0 time 0.643593
[TRT]    Fastest Tactic: 0 Time: 0.643593
[TRT]    --------------- Timing Runner: <reformat> (Reformat)
[TRT]    Tactic: 1002 time 2.21883
[TRT]    Tactic: 0 time 0.597579
[TRT]    Fastest Tactic: 0 Time: 0.597579
[TRT]    --------------- Timing Runner: <reformat> (Reformat)
[TRT]    Tactic: 1002 time 2.2151
[TRT]    Tactic: 0 time 0.587552
[TRT]    Fastest Tactic: 0 Time: 0.587552
[TRT]    *************** Autotuning format combination: Float(1,112,12544,802816) -> Float(1,56,3136,200704) ***************
[TRT]    --------------- Timing Runner: pool1/3x3_s2 (Pooling)
[TRT]    Tactic: -1 time 0.624636
[TRT]    Fastest Tactic: -1 Time: 0.624636
...
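
For reference, here is roughly the Python equivalent of what I am running. This is only a minimal sketch; it assumes the jetson.inference / jetson.utils Python bindings are available inside the container (my actual test uses the prebuilt imagenet binary):

import jetson.inference
import jetson.utils

# load the classification network (GoogleNet is the default used by the sample)
net = jetson.inference.imageNet("googlenet")

# open the CSI camera as the input and the attached display as the output
camera = jetson.utils.videoSource("csi://0")
display = jetson.utils.videoOutput("display://0")

# capture / classify / render loop
while display.IsStreaming():
    img = camera.Capture()
    class_id, confidence = net.Classify(img)
    display.Render(img)
    display.SetStatus("{:s} ({:.2f}%)".format(net.GetClassDesc(class_id), confidence * 100))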

Hi,

Based on your log, it looks like the app is still converting the model into a TensorRT engine.
This is a one-time job, but it may take several minutes to finish.

Would you mind waiting a bit longer to see if any output appears?
Thanks.
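
As a quick way to confirm the one-time build has finished (a hypothetical check, using the engine path shown in your log above), you can look for the serialized engine cache file; once it exists, later launches should skip the profiling step and start in seconds:

# hedged sketch: check for the TensorRT engine cache noted in the log above
import os

engine_path = "/usr/local/bin/networks/bvlc_googlenet.caffemodel.1.1.6001.GPU.FP16.engine"
print("engine cache present:", os.path.exists(engine_path))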

@AastaLLL Thank you, everything worked fine after letting it load for a few minutes. Rookie mistake…

Good to know it works.
Thanks for the feedback.