login as: jetson
jetson@10.0.0.140's password:
Welcome to Ubuntu 18.04.5 LTS (GNU/Linux 4.9.253-tegra aarch64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage

This system has been minimized by removing packages and content that are
not required on a system that users do not log into.

To restore this content, you can run the 'unminimize' command.

129 updates can be applied immediately.
72 of these updates are standard security updates.
To see these additional updates run: apt list --upgradable

Last login: Mon Oct 25 05:19:59 2021 from 10.0.0.170
jetson@jetson:~$ echo storing default clocks settings for restoration later
storing default clocks settings for restoration later
jetson@jetson:~$ sudo jetson_clocks --store ~/default.clocks
[sudo] password for jetson:
jetson@jetson:~$ cd jetson-inference/
jetson@jetson:~/jetson-inference$ docker/run.sh
reading L4T version from /etc/nv_tegra_release
L4T BSP Version:  L4T R32.6.1
size of data/networks:  685298877 bytes
CONTAINER:     dustynv/jetson-inference:r32.6.1
DATA_VOLUME:   --volume /home/jetson/jetson-inference/data:/jetson-inference/data --volume /home/jetson/jetson-inference/python/training/classification/data:/jetson-inference/python/training/classification/data --volume /home/jetson/jetson-inference/python/training/classification/models:/jetson-inference/python/training/classification/models --volume /home/jetson/jetson-inference/python/training/detection/ssd/data:/jetson-inference/python/training/detection/ssd/data --volume /home/jetson/jetson-inference/python/training/detection/ssd/models:/jetson-inference/python/training/detection/ssd/models
USER_VOLUME:
USER_COMMAND:
V4L2_DEVICES:
xhost:  unable to open display ""
root@jetson:/jetson-inference#
root@jetson:/jetson-inference#
root@jetson:/jetson-inference#
root@jetson:/jetson-inference# echo with default settings and DVFS enabled classification will fail
with default settings and DVFS enabled
classification will fail root@jetson:/jetson-inference# cd build/aarch64/bin/ root@jetson:/jetson-inference/build/aarch64/bin# ./imagenet.py images/orange_0.jpg images/test/output_0.jpg jetson.inference -- imageNet loading network using argv command line params imageNet -- loading classification network model from: -- prototxt networks/googlenet.prototxt -- model networks/bvlc_googlenet.caffemodel -- class_labels networks/ilsvrc12_synset_words.txt -- input_blob 'data' -- output_blob 'prob' -- batch_size 1 [TRT] TensorRT version 8.0.1 [TRT] loading NVIDIA plugins... [TRT] Registered plugin creator - ::GridAnchor_TRT version 1 [TRT] Registered plugin creator - ::GridAnchorRect_TRT version 1 [TRT] Registered plugin creator - ::NMS_TRT version 1 [TRT] Registered plugin creator - ::Reorg_TRT version 1 [TRT] Registered plugin creator - ::Region_TRT version 1 [TRT] Registered plugin creator - ::Clip_TRT version 1 [TRT] Registered plugin creator - ::LReLU_TRT version 1 [TRT] Registered plugin creator - ::PriorBox_TRT version 1 [TRT] Registered plugin creator - ::Normalize_TRT version 1 [TRT] Registered plugin creator - ::ScatterND version 1 [TRT] Registered plugin creator - ::RPROI_TRT version 1 [TRT] Registered plugin creator - ::BatchedNMS_TRT version 1 [TRT] Registered plugin creator - ::BatchedNMSDynamic_TRT version 1 [TRT] Could not register plugin creator - ::FlattenConcat_TRT version 1 [TRT] Registered plugin creator - ::CropAndResize version 1 [TRT] Registered plugin creator - ::DetectionLayer_TRT version 1 [TRT] Registered plugin creator - ::EfficientNMS_ONNX_TRT version 1 [TRT] Registered plugin creator - ::EfficientNMS_TRT version 1 [TRT] Registered plugin creator - ::Proposal version 1 [TRT] Registered plugin creator - ::ProposalLayer_TRT version 1 [TRT] Registered plugin creator - ::PyramidROIAlign_TRT version 1 [TRT] Registered plugin creator - ::ResizeNearest_TRT version 1 [TRT] Registered plugin creator - ::Split version 1 [TRT] Registered plugin creator - 
::SpecialSlice_TRT version 1 [TRT] Registered plugin creator - ::InstanceNormalization_TRT version 1 [TRT] detected model format - caffe (extension '.caffemodel') [TRT] desired precision specified for GPU: FASTEST [TRT] requested fasted precision for device GPU without providing valid calibrator, disabling INT8 [TRT] [MemUsageChange] Init CUDA: CPU +203, GPU +0, now: CPU 226, GPU 3051 (MiB) [TRT] native precisions detected for GPU: FP32, FP16 [TRT] selecting fastest native precision for GPU: FP16 [TRT] attempting to open engine cache file networks/bvlc_googlenet.caffemodel.1.1.8001.GPU.FP16.engine [TRT] loading network plan from engine cache... networks/bvlc_googlenet.caffemodel.1.1.8001.GPU.FP16.engine [TRT] device GPU, loaded networks/bvlc_googlenet.caffemodel [TRT] [MemUsageChange] Init CUDA: CPU +0, GPU +0, now: CPU 247, GPU 3071 (MiB) [TRT] Loaded engine size: 20 MB [TRT] [MemUsageSnapshot] deserializeCudaEngine begin: CPU 247 MiB, GPU 3071 MiB [TRT] Using cublas a tactic source [TRT] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +158, GPU +161, now: CPU 405, GPU 3233 (MiB) [TRT] Using cuDNN as a tactic source [TRT] [MemUsageChange] Init cuDNN: CPU +240, GPU +239, now: CPU 645, GPU 3472 (MiB) [TRT] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +0, now: CPU 645, GPU 3472 (MiB) [TRT] Deserialization required 3485947 microseconds. 
[TRT] [MemUsageSnapshot] deserializeCudaEngine end: CPU 645 MiB, GPU 3472 MiB [TRT] [MemUsageSnapshot] ExecutionContext creation begin: CPU 645 MiB, GPU 3472 MiB [TRT] Using cublas a tactic source [TRT] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +0, now: CPU 645, GPU 3472 (MiB) [TRT] Using cuDNN as a tactic source [TRT] [MemUsageChange] Init cuDNN: CPU +0, GPU +0, now: CPU 645, GPU 3472 (MiB) [TRT] Total per-runner device memory is 14754304 [TRT] Total per-runner host memory is 89824 [TRT] Allocated activation device memory of size 3612672 [TRT] [MemUsageSnapshot] ExecutionContext creation end: CPU 646 MiB, GPU 3472 MiB [TRT] [TRT] CUDA engine context initialized on device GPU: [TRT] -- layers 68 [TRT] -- maxBatchSize 1 [TRT] -- deviceMemory 3612672 [TRT] -- bindings 2 [TRT] binding 0 -- index 0 -- name 'data' -- type FP32 -- in/out INPUT -- # dims 3 -- dim #0 3 -- dim #1 224 -- dim #2 224 [TRT] binding 1 -- index 1 -- name 'prob' -- type FP32 -- in/out OUTPUT -- # dims 3 -- dim #0 1000 -- dim #1 1 -- dim #2 1 [TRT] [TRT] binding to input 0 data binding index: 0 [TRT] binding to input 0 data dims (b=1 c=3 h=224 w=224) size=602112 [TRT] binding to output 0 prob binding index: 1 [TRT] binding to output 0 prob dims (b=1 c=1000 h=1 w=1) size=4000 [TRT] [TRT] device GPU, networks/bvlc_googlenet.caffemodel initialized. [TRT] imageNet -- loaded 1000 class info entries [TRT] imageNet -- networks/bvlc_googlenet.caffemodel initialized. 
[video]  created imageLoader from file:///jetson-inference/build/aarch64/bin/images/orange_0.jpg
------------------------------------------------
imageLoader video options:
------------------------------------------------
  -- URI: file:///jetson-inference/build/aarch64/bin/images/orange_0.jpg
     - protocol:  file
     - location:  images/orange_0.jpg
     - extension: jpg
  -- deviceType: file
  -- ioType:     input
  -- codec:      unknown
  -- width:      0
  -- height:     0
  -- frameRate:  0.000000
  -- bitRate:    0
  -- numBuffers: 4
  -- zeroCopy:   true
  -- flipMethod: none
  -- loop:       0
  -- rtspLatency 2000
------------------------------------------------
[video]  created imageWriter from file:///jetson-inference/build/aarch64/bin/images/test/output_0.jpg
------------------------------------------------
imageWriter video options:
------------------------------------------------
  -- URI: file:///jetson-inference/build/aarch64/bin/images/test/output_0.jpg
     - protocol:  file
     - location:  images/test/output_0.jpg
     - extension: jpg
  -- deviceType: file
  -- ioType:     output
  -- codec:      unknown
  -- width:      0
  -- height:     0
  -- frameRate:  0.000000
  -- bitRate:    0
  -- numBuffers: 4
  -- zeroCopy:   true
  -- flipMethod: none
  -- loop:       0
  -- rtspLatency 2000
------------------------------------------------
[OpenGL] failed to open X11 server connection.
[OpenGL] failed to create X11 Window.
[image] loaded 'images/orange_0.jpg' (1024x683, 3 channels)
Traceback (most recent call last):
  File "./imagenet.py", line 68, in <module>
    class_id, confidence = net.Classify(img)
Exception: jetson.inference -- imageNet.Classify() encountered an error classifying the image
root@jetson:/jetson-inference/build/aarch64/bin# echo no class shown and classification error
no class shown and classification error
root@jetson:/jetson-inference/build/aarch64/bin# echo c program will not show any error but also will be missing classification on output text - output image will have no probability overlayed on top of it
c program will not show any error but also will be missing classification on output text - output image will have no probability overlayed on top of it
root@jetson:/jetson-inference/build/aarch64/bin# ./imagenet images/orange_0.jpg images/test/output_0.jpg
[video]  created imageLoader from file:///jetson-inference/build/aarch64/bin/images/orange_0.jpg
------------------------------------------------
imageLoader video options:
------------------------------------------------
  -- URI: file:///jetson-inference/build/aarch64/bin/images/orange_0.jpg
     - protocol:  file
     - location:  images/orange_0.jpg
     - extension: jpg
  -- deviceType: file
  -- ioType:     input
  -- codec:      unknown
  -- width:      0
  -- height:     0
  -- frameRate:  0.000000
  -- bitRate:    0
  -- numBuffers: 4
  -- zeroCopy:   true
  -- flipMethod: none
  -- loop:       0
  -- rtspLatency 2000
------------------------------------------------
[video]  created imageWriter from file:///jetson-inference/build/aarch64/bin/images/test/output_0.jpg
------------------------------------------------
imageWriter video options:
------------------------------------------------
  -- URI: file:///jetson-inference/build/aarch64/bin/images/test/output_0.jpg
     - protocol:  file
     - location:  images/test/output_0.jpg
     - extension: jpg
  -- deviceType: file
  -- ioType:     output
  -- codec:      unknown
  -- width:      0
  -- height:     0
  -- frameRate:  0.000000
  -- bitRate:    0
  -- numBuffers: 4
  -- zeroCopy:   true
  --
flipMethod: none -- loop: 0 -- rtspLatency 2000 ------------------------------------------------ [OpenGL] failed to open X11 server connection. [OpenGL] failed to create X11 Window. imageNet -- loading classification network model from: -- prototxt networks/googlenet.prototxt -- model networks/bvlc_googlenet.caffemodel -- class_labels networks/ilsvrc12_synset_words.txt -- input_blob 'data' -- output_blob 'prob' -- batch_size 1 [TRT] TensorRT version 8.0.1 [TRT] loading NVIDIA plugins... [TRT] Registered plugin creator - ::GridAnchor_TRT version 1 [TRT] Registered plugin creator - ::GridAnchorRect_TRT version 1 [TRT] Registered plugin creator - ::NMS_TRT version 1 [TRT] Registered plugin creator - ::Reorg_TRT version 1 [TRT] Registered plugin creator - ::Region_TRT version 1 [TRT] Registered plugin creator - ::Clip_TRT version 1 [TRT] Registered plugin creator - ::LReLU_TRT version 1 [TRT] Registered plugin creator - ::PriorBox_TRT version 1 [TRT] Registered plugin creator - ::Normalize_TRT version 1 [TRT] Registered plugin creator - ::ScatterND version 1 [TRT] Registered plugin creator - ::RPROI_TRT version 1 [TRT] Registered plugin creator - ::BatchedNMS_TRT version 1 [TRT] Registered plugin creator - ::BatchedNMSDynamic_TRT version 1 [TRT] Could not register plugin creator - ::FlattenConcat_TRT version 1 [TRT] Registered plugin creator - ::CropAndResize version 1 [TRT] Registered plugin creator - ::DetectionLayer_TRT version 1 [TRT] Registered plugin creator - ::EfficientNMS_ONNX_TRT version 1 [TRT] Registered plugin creator - ::EfficientNMS_TRT version 1 [TRT] Registered plugin creator - ::Proposal version 1 [TRT] Registered plugin creator - ::ProposalLayer_TRT version 1 [TRT] Registered plugin creator - ::PyramidROIAlign_TRT version 1 [TRT] Registered plugin creator - ::ResizeNearest_TRT version 1 [TRT] Registered plugin creator - ::Split version 1 [TRT] Registered plugin creator - ::SpecialSlice_TRT version 1 [TRT] Registered plugin creator - 
::InstanceNormalization_TRT version 1 [TRT] detected model format - caffe (extension '.caffemodel') [TRT] desired precision specified for GPU: FASTEST [TRT] requested fasted precision for device GPU without providing valid calibrator, disabling INT8 [TRT] [MemUsageChange] Init CUDA: CPU +198, GPU +0, now: CPU 224, GPU 3038 (MiB) [TRT] native precisions detected for GPU: FP32, FP16 [TRT] selecting fastest native precision for GPU: FP16 [TRT] attempting to open engine cache file networks/bvlc_googlenet.caffemodel.1.1.8001.GPU.FP16.engine [TRT] loading network plan from engine cache... networks/bvlc_googlenet.caffemodel.1.1.8001.GPU.FP16.engine [TRT] device GPU, loaded networks/bvlc_googlenet.caffemodel [TRT] [MemUsageChange] Init CUDA: CPU +0, GPU +0, now: CPU 244, GPU 3059 (MiB) [TRT] Loaded engine size: 20 MB [TRT] [MemUsageSnapshot] deserializeCudaEngine begin: CPU 244 MiB, GPU 3059 MiB [TRT] Using cublas a tactic source [TRT] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +159, GPU +162, now: CPU 403, GPU 3221 (MiB) [TRT] Using cuDNN as a tactic source [TRT] [MemUsageChange] Init cuDNN: CPU +240, GPU +241, now: CPU 643, GPU 3462 (MiB) [TRT] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +0, now: CPU 643, GPU 3462 (MiB) [TRT] Deserialization required 3442564 microseconds. 
[TRT] [MemUsageSnapshot] deserializeCudaEngine end: CPU 643 MiB, GPU 3462 MiB [TRT] [MemUsageSnapshot] ExecutionContext creation begin: CPU 643 MiB, GPU 3462 MiB [TRT] Using cublas a tactic source [TRT] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +0, now: CPU 643, GPU 3462 (MiB) [TRT] Using cuDNN as a tactic source [TRT] [MemUsageChange] Init cuDNN: CPU +0, GPU +0, now: CPU 643, GPU 3462 (MiB) [TRT] Total per-runner device memory is 14754304 [TRT] Total per-runner host memory is 89824 [TRT] Allocated activation device memory of size 3612672 [TRT] [MemUsageSnapshot] ExecutionContext creation end: CPU 643 MiB, GPU 3462 MiB [TRT] [TRT] CUDA engine context initialized on device GPU: [TRT] -- layers 68 [TRT] -- maxBatchSize 1 [TRT] -- deviceMemory 3612672 [TRT] -- bindings 2 [TRT] binding 0 -- index 0 -- name 'data' -- type FP32 -- in/out INPUT -- # dims 3 -- dim #0 3 -- dim #1 224 -- dim #2 224 [TRT] binding 1 -- index 1 -- name 'prob' -- type FP32 -- in/out OUTPUT -- # dims 3 -- dim #0 1000 -- dim #1 1 -- dim #2 1 [TRT] [TRT] binding to input 0 data binding index: 0 [TRT] binding to input 0 data dims (b=1 c=3 h=224 w=224) size=602112 [TRT] binding to output 0 prob binding index: 1 [TRT] binding to output 0 prob dims (b=1 c=1000 h=1 w=1) size=4000 [TRT] [TRT] device GPU, networks/bvlc_googlenet.caffemodel initialized. [TRT] imageNet -- loaded 1000 class info entries [TRT] imageNet -- networks/bvlc_googlenet.caffemodel initialized. 
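As the Python run above showed, with DVFS enabled `net.Classify(img)` raises an Exception and the script dies. A hedged sketch of a guard around that call (assumes the `jetson.inference` / `jetson.utils` bindings shipped with this project; the import guard lets the snippet load on machines without them):

```python
# Sketch: guard the net.Classify() call from imagenet.py so a failure
# (as seen under DVFS above) is reported instead of crashing the script.
try:
    import jetson.inference
    import jetson.utils
    HAVE_JETSON = True
except ImportError:
    HAVE_JETSON = False   # not running on a Jetson / bindings not installed

def classify_image(path, network="googlenet"):
    """Return (class_id, confidence, label), or None if inference fails."""
    if not HAVE_JETSON:
        return None
    net = jetson.inference.imageNet(network)
    img = jetson.utils.loadImage(path)
    try:
        class_id, confidence = net.Classify(img)
    except Exception as err:   # raised by the bindings when inference fails
        print("classification failed:", err)
        return None
    return class_id, confidence, net.GetClassDesc(class_id)
```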
[image] loaded 'images/orange_0.jpg' (1024x683, 3 channels)
[image] saved 'images/test/output_0.jpg' (1024x683, 3 channels)

[TRT]    ------------------------------------------------
[TRT]    Timing Report networks/bvlc_googlenet.caffemodel
[TRT]    ------------------------------------------------
[TRT]    Pre-Process   CPU   0.08313ms  CUDA   1.75849ms
[TRT]    Network       CPU 113.34237ms  CUDA 111.00636ms
[TRT]    Post-Process  CPU   0.25678ms  CUDA   0.25552ms
[TRT]    Total         CPU 113.68227ms  CUDA 113.02037ms
[TRT]    ------------------------------------------------

[TRT]    note -- when processing a single image, run 'sudo jetson_clocks' before
                to disable DVFS for more accurate profiling/timing measurements

[image] imageLoader -- End of Stream (EOS) has been reached, stream has been closed
imagenet:  shutting down...
imagenet:  shutdown complete.
root@jetson:/jetson-inference/build/aarch64/bin# echo notice no classification shown in output
notice no classification shown in output
root@jetson:/jetson-inference/build/aarch64/bin# echo now to exit docker and disable DVFS with jetson_clocks
now to exit docker and disable DVFS with jetson_clocks
root@jetson:/jetson-inference/build/aarch64/bin# exit
exit
jetson@jetson:~/jetson-inference$ sudo jetson_clocks
jetson@jetson:~/jetson-inference$ docker/run.sh
reading L4T version from /etc/nv_tegra_release
L4T BSP Version:  L4T R32.6.1
size of data/networks:  685298877 bytes
CONTAINER:     dustynv/jetson-inference:r32.6.1
DATA_VOLUME:   --volume /home/jetson/jetson-inference/data:/jetson-inference/data --volume /home/jetson/jetson-inference/python/training/classification/data:/jetson-inference/python/training/classification/data --volume /home/jetson/jetson-inference/python/training/classification/models:/jetson-inference/python/training/classification/models --volume /home/jetson/jetson-inference/python/training/detection/ssd/data:/jetson-inference/python/training/detection/ssd/data --volume
/home/jetson/jetson-inference/python/training/detection/ssd/models:/jetson-inference/python/training/detection/ssd/models USER_VOLUME: USER_COMMAND: V4L2_DEVICES: xhost: unable to open display "" root@jetson:/jetson-inference# root@jetson:/jetson-inference# cd build/aarch64/bin/ root@jetson:/jetson-inference/build/aarch64/bin# ./imagenet.py images/orange_0.jpg images/test/output_0.jpg jetson.inference -- imageNet loading network using argv command line params imageNet -- loading classification network model from: -- prototxt networks/googlenet.prototxt -- model networks/bvlc_googlenet.caffemodel -- class_labels networks/ilsvrc12_synset_words.txt -- input_blob 'data' -- output_blob 'prob' -- batch_size 1 [TRT] TensorRT version 8.0.1 [TRT] loading NVIDIA plugins... [TRT] Registered plugin creator - ::GridAnchor_TRT version 1 [TRT] Registered plugin creator - ::GridAnchorRect_TRT version 1 [TRT] Registered plugin creator - ::NMS_TRT version 1 [TRT] Registered plugin creator - ::Reorg_TRT version 1 [TRT] Registered plugin creator - ::Region_TRT version 1 [TRT] Registered plugin creator - ::Clip_TRT version 1 [TRT] Registered plugin creator - ::LReLU_TRT version 1 [TRT] Registered plugin creator - ::PriorBox_TRT version 1 [TRT] Registered plugin creator - ::Normalize_TRT version 1 [TRT] Registered plugin creator - ::ScatterND version 1 [TRT] Registered plugin creator - ::RPROI_TRT version 1 [TRT] Registered plugin creator - ::BatchedNMS_TRT version 1 [TRT] Registered plugin creator - ::BatchedNMSDynamic_TRT version 1 [TRT] Could not register plugin creator - ::FlattenConcat_TRT version 1 [TRT] Registered plugin creator - ::CropAndResize version 1 [TRT] Registered plugin creator - ::DetectionLayer_TRT version 1 [TRT] Registered plugin creator - ::EfficientNMS_ONNX_TRT version 1 [TRT] Registered plugin creator - ::EfficientNMS_TRT version 1 [TRT] Registered plugin creator - ::Proposal version 1 [TRT] Registered plugin creator - ::ProposalLayer_TRT version 1 [TRT] 
Registered plugin creator - ::PyramidROIAlign_TRT version 1 [TRT] Registered plugin creator - ::ResizeNearest_TRT version 1 [TRT] Registered plugin creator - ::Split version 1 [TRT] Registered plugin creator - ::SpecialSlice_TRT version 1 [TRT] Registered plugin creator - ::InstanceNormalization_TRT version 1 [TRT] detected model format - caffe (extension '.caffemodel') [TRT] desired precision specified for GPU: FASTEST [TRT] requested fasted precision for device GPU without providing valid calibrator, disabling INT8 [TRT] [MemUsageChange] Init CUDA: CPU +203, GPU +0, now: CPU 226, GPU 3055 (MiB) [TRT] native precisions detected for GPU: FP32, FP16 [TRT] selecting fastest native precision for GPU: FP16 [TRT] attempting to open engine cache file networks/bvlc_googlenet.caffemodel.1.1.8001.GPU.FP16.engine [TRT] loading network plan from engine cache... networks/bvlc_googlenet.caffemodel.1.1.8001.GPU.FP16.engine [TRT] device GPU, loaded networks/bvlc_googlenet.caffemodel [TRT] [MemUsageChange] Init CUDA: CPU +0, GPU +0, now: CPU 247, GPU 3075 (MiB) [TRT] Loaded engine size: 20 MB [TRT] [MemUsageSnapshot] deserializeCudaEngine begin: CPU 247 MiB, GPU 3075 MiB [TRT] Using cublas a tactic source [TRT] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +158, GPU +161, now: CPU 405, GPU 3237 (MiB) [TRT] Using cuDNN as a tactic source [TRT] [MemUsageChange] Init cuDNN: CPU +241, GPU +240, now: CPU 646, GPU 3477 (MiB) [TRT] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +0, now: CPU 645, GPU 3477 (MiB) [TRT] Deserialization required 2602464 microseconds. 
[TRT] [MemUsageSnapshot] deserializeCudaEngine end: CPU 645 MiB, GPU 3477 MiB [TRT] [MemUsageSnapshot] ExecutionContext creation begin: CPU 645 MiB, GPU 3477 MiB [TRT] Using cublas a tactic source [TRT] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +0, now: CPU 645, GPU 3477 (MiB) [TRT] Using cuDNN as a tactic source [TRT] [MemUsageChange] Init cuDNN: CPU +1, GPU +0, now: CPU 646, GPU 3477 (MiB) [TRT] Total per-runner device memory is 14754304 [TRT] Total per-runner host memory is 89824 [TRT] Allocated activation device memory of size 3612672 [TRT] [MemUsageSnapshot] ExecutionContext creation end: CPU 646 MiB, GPU 3477 MiB [TRT] [TRT] CUDA engine context initialized on device GPU: [TRT] -- layers 68 [TRT] -- maxBatchSize 1 [TRT] -- deviceMemory 3612672 [TRT] -- bindings 2 [TRT] binding 0 -- index 0 -- name 'data' -- type FP32 -- in/out INPUT -- # dims 3 -- dim #0 3 -- dim #1 224 -- dim #2 224 [TRT] binding 1 -- index 1 -- name 'prob' -- type FP32 -- in/out OUTPUT -- # dims 3 -- dim #0 1000 -- dim #1 1 -- dim #2 1 [TRT] [TRT] binding to input 0 data binding index: 0 [TRT] binding to input 0 data dims (b=1 c=3 h=224 w=224) size=602112 [TRT] binding to output 0 prob binding index: 1 [TRT] binding to output 0 prob dims (b=1 c=1000 h=1 w=1) size=4000 [TRT] [TRT] device GPU, networks/bvlc_googlenet.caffemodel initialized. [TRT] imageNet -- loaded 1000 class info entries [TRT] imageNet -- networks/bvlc_googlenet.caffemodel initialized. 
[video]  created imageLoader from file:///jetson-inference/build/aarch64/bin/images/orange_0.jpg
------------------------------------------------
imageLoader video options:
------------------------------------------------
  -- URI: file:///jetson-inference/build/aarch64/bin/images/orange_0.jpg
     - protocol:  file
     - location:  images/orange_0.jpg
     - extension: jpg
  -- deviceType: file
  -- ioType:     input
  -- codec:      unknown
  -- width:      0
  -- height:     0
  -- frameRate:  0.000000
  -- bitRate:    0
  -- numBuffers: 4
  -- zeroCopy:   true
  -- flipMethod: none
  -- loop:       0
  -- rtspLatency 2000
------------------------------------------------
[video]  created imageWriter from file:///jetson-inference/build/aarch64/bin/images/test/output_0.jpg
------------------------------------------------
imageWriter video options:
------------------------------------------------
  -- URI: file:///jetson-inference/build/aarch64/bin/images/test/output_0.jpg
     - protocol:  file
     - location:  images/test/output_0.jpg
     - extension: jpg
  -- deviceType: file
  -- ioType:     output
  -- codec:      unknown
  -- width:      0
  -- height:     0
  -- frameRate:  0.000000
  -- bitRate:    0
  -- numBuffers: 4
  -- zeroCopy:   true
  -- flipMethod: none
  -- loop:       0
  -- rtspLatency 2000
------------------------------------------------
[OpenGL] failed to open X11 server connection.
[OpenGL] failed to create X11 Window.
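The fix applied above was: exit the container, run `sudo jetson_clocks` on the host (pinning clocks to maximum, disabling DVFS), then re-enter and re-run; the settings stored at the start of the session are restored later with `--restore`. A minimal sketch of that store/pin/restore sequence, assuming the stock `jetson_clocks` utility from JetPack (it does nothing on machines where the utility is absent):

```python
# Sketch: run a command with clocks pinned (DVFS disabled), restoring the
# saved default clock settings afterwards, as done manually in this session.
import shutil
import subprocess
from pathlib import Path

STORE_FILE = Path.home() / "default.clocks"

def run_with_dvfs_disabled(cmd):
    """Run cmd with pinned clocks; returns False if jetson_clocks is absent."""
    if shutil.which("jetson_clocks") is None:
        return False                                        # not a Jetson
    subprocess.run(["sudo", "jetson_clocks", "--store", str(STORE_FILE)], check=True)
    subprocess.run(["sudo", "jetson_clocks"], check=True)   # max clocks, DVFS off
    try:
        subprocess.run(cmd, check=True)
    finally:                                                # always restore
        subprocess.run(["sudo", "jetson_clocks", "--restore", str(STORE_FILE)], check=True)
    return True
```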
[image] loaded 'images/orange_0.jpg' (1024x683, 3 channels)
class 0950 - 0.966797 (orange)
[image] saved 'images/test/output_0.jpg' (1024x683, 3 channels)

[TRT]    ------------------------------------------------
[TRT]    Timing Report networks/bvlc_googlenet.caffemodel
[TRT]    ------------------------------------------------
[TRT]    Pre-Process   CPU  0.07453ms  CUDA  0.58609ms
[TRT]    Network       CPU 14.83198ms  CUDA 14.14344ms
[TRT]    Post-Process  CPU  0.22084ms  CUDA  0.22057ms
[TRT]    Total         CPU 15.12736ms  CUDA 14.95011ms
[TRT]    ------------------------------------------------

[TRT]    note -- when processing a single image, run 'sudo jetson_clocks' before
                to disable DVFS for more accurate profiling/timing measurements

root@jetson:/jetson-inference/build/aarch64/bin# echo now that DVFS disabled item is properly classified as class 0950 - 0.966797 orange
now that DVFS disabled item is properly classified as class 0950 - 0.966797 orange
root@jetson:/jetson-inference/build/aarch64/bin# echo c program will also properly function
c program will also properly function
root@jetson:/jetson-inference/build/aarch64/bin# ./imagenet images/orange_0.jpg images/test/output_0.jpg
[video]  created imageLoader from file:///jetson-inference/build/aarch64/bin/images/orange_0.jpg
------------------------------------------------
imageLoader video options:
------------------------------------------------
  -- URI: file:///jetson-inference/build/aarch64/bin/images/orange_0.jpg
     - protocol:  file
     - location:  images/orange_0.jpg
     - extension: jpg
  -- deviceType: file
  -- ioType:     input
  -- codec:      unknown
  -- width:      0
  -- height:     0
  -- frameRate:  0.000000
  -- bitRate:    0
  -- numBuffers: 4
  -- zeroCopy:   true
  -- flipMethod: none
  -- loop:       0
  -- rtspLatency 2000
------------------------------------------------
[video]  created imageWriter from file:///jetson-inference/build/aarch64/bin/images/test/output_0.jpg
------------------------------------------------
imageWriter video options:
------------------------------------------------
  -- URI:
file:///jetson-inference/build/aarch64/bin/images/test/output_0.jpg - protocol: file - location: images/test/output_0.jpg - extension: jpg -- deviceType: file -- ioType: output -- codec: unknown -- width: 0 -- height: 0 -- frameRate: 0.000000 -- bitRate: 0 -- numBuffers: 4 -- zeroCopy: true -- flipMethod: none -- loop: 0 -- rtspLatency 2000 ------------------------------------------------ [OpenGL] failed to open X11 server connection. [OpenGL] failed to create X11 Window. imageNet -- loading classification network model from: -- prototxt networks/googlenet.prototxt -- model networks/bvlc_googlenet.caffemodel -- class_labels networks/ilsvrc12_synset_words.txt -- input_blob 'data' -- output_blob 'prob' -- batch_size 1 [TRT] TensorRT version 8.0.1 [TRT] loading NVIDIA plugins... [TRT] Registered plugin creator - ::GridAnchor_TRT version 1 [TRT] Registered plugin creator - ::GridAnchorRect_TRT version 1 [TRT] Registered plugin creator - ::NMS_TRT version 1 [TRT] Registered plugin creator - ::Reorg_TRT version 1 [TRT] Registered plugin creator - ::Region_TRT version 1 [TRT] Registered plugin creator - ::Clip_TRT version 1 [TRT] Registered plugin creator - ::LReLU_TRT version 1 [TRT] Registered plugin creator - ::PriorBox_TRT version 1 [TRT] Registered plugin creator - ::Normalize_TRT version 1 [TRT] Registered plugin creator - ::ScatterND version 1 [TRT] Registered plugin creator - ::RPROI_TRT version 1 [TRT] Registered plugin creator - ::BatchedNMS_TRT version 1 [TRT] Registered plugin creator - ::BatchedNMSDynamic_TRT version 1 [TRT] Could not register plugin creator - ::FlattenConcat_TRT version 1 [TRT] Registered plugin creator - ::CropAndResize version 1 [TRT] Registered plugin creator - ::DetectionLayer_TRT version 1 [TRT] Registered plugin creator - ::EfficientNMS_ONNX_TRT version 1 [TRT] Registered plugin creator - ::EfficientNMS_TRT version 1 [TRT] Registered plugin creator - ::Proposal version 1 [TRT] Registered plugin creator - ::ProposalLayer_TRT version 1 
[TRT] Registered plugin creator - ::PyramidROIAlign_TRT version 1 [TRT] Registered plugin creator - ::ResizeNearest_TRT version 1 [TRT] Registered plugin creator - ::Split version 1 [TRT] Registered plugin creator - ::SpecialSlice_TRT version 1 [TRT] Registered plugin creator - ::InstanceNormalization_TRT version 1 [TRT] detected model format - caffe (extension '.caffemodel') [TRT] desired precision specified for GPU: FASTEST [TRT] requested fasted precision for device GPU without providing valid calibrator, disabling INT8 [TRT] [MemUsageChange] Init CUDA: CPU +198, GPU +0, now: CPU 224, GPU 3038 (MiB) [TRT] native precisions detected for GPU: FP32, FP16 [TRT] selecting fastest native precision for GPU: FP16 [TRT] attempting to open engine cache file networks/bvlc_googlenet.caffemodel.1.1.8001.GPU.FP16.engine [TRT] loading network plan from engine cache... networks/bvlc_googlenet.caffemodel.1.1.8001.GPU.FP16.engine [TRT] device GPU, loaded networks/bvlc_googlenet.caffemodel [TRT] [MemUsageChange] Init CUDA: CPU +0, GPU +0, now: CPU 244, GPU 3059 (MiB) [TRT] Loaded engine size: 20 MB [TRT] [MemUsageSnapshot] deserializeCudaEngine begin: CPU 244 MiB, GPU 3059 MiB [TRT] Using cublas a tactic source [TRT] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +158, GPU +160, now: CPU 403, GPU 3219 (MiB) [TRT] Using cuDNN as a tactic source [TRT] [MemUsageChange] Init cuDNN: CPU +240, GPU +217, now: CPU 643, GPU 3436 (MiB) [TRT] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +0, now: CPU 643, GPU 3436 (MiB) [TRT] Deserialization required 2582920 microseconds. 
[TRT] [MemUsageSnapshot] deserializeCudaEngine end: CPU 643 MiB, GPU 3436 MiB [TRT] [MemUsageSnapshot] ExecutionContext creation begin: CPU 643 MiB, GPU 3436 MiB [TRT] Using cublas a tactic source [TRT] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +0, now: CPU 643, GPU 3436 (MiB) [TRT] Using cuDNN as a tactic source [TRT] [MemUsageChange] Init cuDNN: CPU +0, GPU +0, now: CPU 643, GPU 3436 (MiB) [TRT] Total per-runner device memory is 14754304 [TRT] Total per-runner host memory is 89824 [TRT] Allocated activation device memory of size 3612672 [TRT] [MemUsageSnapshot] ExecutionContext creation end: CPU 643 MiB, GPU 3436 MiB [TRT] [TRT] CUDA engine context initialized on device GPU: [TRT] -- layers 68 [TRT] -- maxBatchSize 1 [TRT] -- deviceMemory 3612672 [TRT] -- bindings 2 [TRT] binding 0 -- index 0 -- name 'data' -- type FP32 -- in/out INPUT -- # dims 3 -- dim #0 3 -- dim #1 224 -- dim #2 224 [TRT] binding 1 -- index 1 -- name 'prob' -- type FP32 -- in/out OUTPUT -- # dims 3 -- dim #0 1000 -- dim #1 1 -- dim #2 1 [TRT] [TRT] binding to input 0 data binding index: 0 [TRT] binding to input 0 data dims (b=1 c=3 h=224 w=224) size=602112 [TRT] binding to output 0 prob binding index: 1 [TRT] binding to output 0 prob dims (b=1 c=1000 h=1 w=1) size=4000 [TRT] [TRT] device GPU, networks/bvlc_googlenet.caffemodel initialized. [TRT] imageNet -- loaded 1000 class info entries [TRT] imageNet -- networks/bvlc_googlenet.caffemodel initialized. 
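The result line printed once DVFS is disabled, `class 0950 - 0.966797 (orange)`, follows a simple fixed format (zero-padded class id, six-decimal confidence). A small illustrative helper, not part of jetson-inference, that reproduces that line from a classification result:

```python
# Illustrative helper: rebuild the "class NNNN - C.CCCCCC (label)" result
# line seen in this session from a (class_id, confidence, label) triple.
def format_result(class_id, confidence, label):
    return f"class {class_id:04d} - {confidence:.6f} ({label})"

print(format_result(950, 0.966797, "orange"))
# prints: class 0950 - 0.966797 (orange)
```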
[image] loaded 'images/orange_0.jpg' (1024x683, 3 channels)
class 0950 - 0.966797 (orange)
imagenet: 96.67969% class #950 (orange)
[image] saved 'images/test/output_0.jpg' (1024x683, 3 channels)

[TRT]    ------------------------------------------------
[TRT]    Timing Report networks/bvlc_googlenet.caffemodel
[TRT]    ------------------------------------------------
[TRT]    Pre-Process   CPU  0.08511ms  CUDA  0.57734ms
[TRT]    Network       CPU 14.62156ms  CUDA 14.01115ms
[TRT]    Post-Process  CPU  0.20511ms  CUDA  0.20495ms
[TRT]    Total         CPU 14.91177ms  CUDA 14.79344ms
[TRT]    ------------------------------------------------

[TRT]    note -- when processing a single image, run 'sudo jetson_clocks' before
                to disable DVFS for more accurate profiling/timing measurements

[image] imageLoader -- End of Stream (EOS) has been reached, stream has been closed
imagenet:  shutting down...
imagenet:  shutdown complete.
root@jetson:/jetson-inference/build/aarch64/bin# echo as shown output confirms classification also working in c program class 0950 - 0.966797 orange
as shown output confirms classification also working in c program class 0950 - 0.966797 orange
root@jetson:/jetson-inference/build/aarch64/bin# echo now to exit and restore default clocks settings to show it will fail once again
now to exit and restore default clocks settings to show it will fail once again
root@jetson:/jetson-inference/build/aarch64/bin# exit
exit
jetson@jetson:~/jetson-inference$ sudo jetson_clocks --restore ~/default.clocks
jetson@jetson:~/jetson-inference$ docker/run.sh
reading L4T version from /etc/nv_tegra_release
L4T BSP Version:  L4T R32.6.1
size of data/networks:  685298877 bytes
CONTAINER:     dustynv/jetson-inference:r32.6.1
DATA_VOLUME:   --volume /home/jetson/jetson-inference/data:/jetson-inference/data --volume /home/jetson/jetson-inference/python/training/classification/data:/jetson-inference/python/training/classification/data --volume
/home/jetson/jetson-inference/python/training/classification/models:/jetson-inference/python/training/classification/models --volume /home/jetson/jetson-inference/python/training/detection/ssd/data:/jetson-inference/python/training/detection/ssd/data --volume /home/jetson/jetson-inference/python/training/detection/ssd/models:/jetson-inference/python/training/detection/ssd/models
USER_VOLUME:
USER_COMMAND:
V4L2_DEVICES:
xhost:  unable to open display ""
root@jetson:/jetson-inference#
root@jetson:/jetson-inference# cd build/aarch64/bin/
root@jetson:/jetson-inference/build/aarch64/bin# ./imagenet.py images/orange_0.jpg images/test/output_0.jpg
jetson.inference -- imageNet loading network using argv command line params

imageNet -- loading classification network model from:
         -- prototxt     networks/googlenet.prototxt
         -- model        networks/bvlc_googlenet.caffemodel
         -- class_labels networks/ilsvrc12_synset_words.txt
         -- input_blob   'data'
         -- output_blob  'prob'
         -- batch_size   1

[TRT] TensorRT version 8.0.1
[TRT] loading NVIDIA plugins...
[TRT] Registered plugin creator - ::GridAnchor_TRT version 1
[TRT] Registered plugin creator - ::GridAnchorRect_TRT version 1
[TRT] Registered plugin creator - ::NMS_TRT version 1
[TRT] Registered plugin creator - ::Reorg_TRT version 1
[TRT] Registered plugin creator - ::Region_TRT version 1
[TRT] Registered plugin creator - ::Clip_TRT version 1
[TRT] Registered plugin creator - ::LReLU_TRT version 1
[TRT] Registered plugin creator - ::PriorBox_TRT version 1
[TRT] Registered plugin creator - ::Normalize_TRT version 1
[TRT] Registered plugin creator - ::ScatterND version 1
[TRT] Registered plugin creator - ::RPROI_TRT version 1
[TRT] Registered plugin creator - ::BatchedNMS_TRT version 1
[TRT] Registered plugin creator - ::BatchedNMSDynamic_TRT version 1
[TRT] Could not register plugin creator - ::FlattenConcat_TRT version 1
[TRT] Registered plugin creator - ::CropAndResize version 1
[TRT] Registered plugin creator - ::DetectionLayer_TRT version 1
[TRT] Registered plugin creator - ::EfficientNMS_ONNX_TRT version 1
[TRT] Registered plugin creator - ::EfficientNMS_TRT version 1
[TRT] Registered plugin creator - ::Proposal version 1
[TRT] Registered plugin creator - ::ProposalLayer_TRT version 1
[TRT] Registered plugin creator - ::PyramidROIAlign_TRT version 1
[TRT] Registered plugin creator - ::ResizeNearest_TRT version 1
[TRT] Registered plugin creator - ::Split version 1
[TRT] Registered plugin creator - ::SpecialSlice_TRT version 1
[TRT] Registered plugin creator - ::InstanceNormalization_TRT version 1
[TRT] detected model format - caffe  (extension '.caffemodel')
[TRT] desired precision specified for GPU: FASTEST
[TRT] requested fasted precision for device GPU without providing valid calibrator, disabling INT8
[TRT] [MemUsageChange] Init CUDA: CPU +203, GPU +0, now: CPU 226, GPU 3051 (MiB)
[TRT] native precisions detected for GPU:  FP32, FP16
[TRT] selecting fastest native precision for GPU:  FP16
[TRT] attempting to open engine cache file
networks/bvlc_googlenet.caffemodel.1.1.8001.GPU.FP16.engine
[TRT] loading network plan from engine cache... networks/bvlc_googlenet.caffemodel.1.1.8001.GPU.FP16.engine
[TRT] device GPU, loaded networks/bvlc_googlenet.caffemodel
[TRT] [MemUsageChange] Init CUDA: CPU +0, GPU +0, now: CPU 247, GPU 3071 (MiB)
[TRT] Loaded engine size: 20 MB
[TRT] [MemUsageSnapshot] deserializeCudaEngine begin: CPU 247 MiB, GPU 3071 MiB
[TRT] Using cublas a tactic source
[TRT] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +158, GPU +159, now: CPU 405, GPU 3231 (MiB)
[TRT] Using cuDNN as a tactic source
[TRT] [MemUsageChange] Init cuDNN: CPU +240, GPU +245, now: CPU 645, GPU 3476 (MiB)
[TRT] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +0, now: CPU 645, GPU 3476 (MiB)
[TRT] Deserialization required 3511439 microseconds.
[TRT] [MemUsageSnapshot] deserializeCudaEngine end: CPU 645 MiB, GPU 3476 MiB
[TRT] [MemUsageSnapshot] ExecutionContext creation begin: CPU 645 MiB, GPU 3476 MiB
[TRT] Using cublas a tactic source
[TRT] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +0, now: CPU 645, GPU 3476 (MiB)
[TRT] Using cuDNN as a tactic source
[TRT] [MemUsageChange] Init cuDNN: CPU +1, GPU +1, now: CPU 646, GPU 3477 (MiB)
[TRT] Total per-runner device memory is 14754304
[TRT] Total per-runner host memory is 89824
[TRT] Allocated activation device memory of size 3612672
[TRT] [MemUsageSnapshot] ExecutionContext creation end: CPU 646 MiB, GPU 3477 MiB
[TRT]
[TRT] CUDA engine context initialized on device GPU:
[TRT]    -- layers       68
[TRT]    -- maxBatchSize 1
[TRT]    -- deviceMemory 3612672
[TRT]    -- bindings     2
[TRT]    binding 0
                -- index   0
                -- name    'data'
                -- type    FP32
                -- in/out  INPUT
                -- # dims  3
                -- dim #0  3
                -- dim #1  224
                -- dim #2  224
[TRT]    binding 1
                -- index   1
                -- name    'prob'
                -- type    FP32
                -- in/out  OUTPUT
                -- # dims  3
                -- dim #0  1000
                -- dim #1  1
                -- dim #2  1
[TRT]
[TRT] binding to input 0 data  binding index:  0
[TRT] binding to input 0 data  dims (b=1 c=3 h=224 w=224) size=602112
[TRT] binding to output 0 prob
 binding index:  1
[TRT] binding to output 0 prob  dims (b=1 c=1000 h=1 w=1) size=4000
[TRT]
[TRT] device GPU, networks/bvlc_googlenet.caffemodel initialized.
[TRT] imageNet -- loaded 1000 class info entries
[TRT] imageNet -- networks/bvlc_googlenet.caffemodel initialized.
[video] created imageLoader from file:///jetson-inference/build/aarch64/bin/images/orange_0.jpg
------------------------------------------------
imageLoader video options:
------------------------------------------------
  -- URI: file:///jetson-inference/build/aarch64/bin/images/orange_0.jpg
     - protocol:  file
     - location:  images/orange_0.jpg
     - extension: jpg
  -- deviceType: file
  -- ioType:     input
  -- codec:      unknown
  -- width:      0
  -- height:     0
  -- frameRate:  0.000000
  -- bitRate:    0
  -- numBuffers: 4
  -- zeroCopy:   true
  -- flipMethod: none
  -- loop:       0
  -- rtspLatency 2000
------------------------------------------------
[video] created imageWriter from file:///jetson-inference/build/aarch64/bin/images/test/output_0.jpg
------------------------------------------------
imageWriter video options:
------------------------------------------------
  -- URI: file:///jetson-inference/build/aarch64/bin/images/test/output_0.jpg
     - protocol:  file
     - location:  images/test/output_0.jpg
     - extension: jpg
  -- deviceType: file
  -- ioType:     output
  -- codec:      unknown
  -- width:      0
  -- height:     0
  -- frameRate:  0.000000
  -- bitRate:    0
  -- numBuffers: 4
  -- zeroCopy:   true
  -- flipMethod: none
  -- loop:       0
  -- rtspLatency 2000
------------------------------------------------
[OpenGL] failed to open X11 server connection.
[OpenGL] failed to create X11 Window.
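For reference, this run loaded its plan from the engine cache file networks/bvlc_googlenet.caffemodel.1.1.8001.GPU.FP16.engine seen earlier in the log. The suffix fields line up with the surrounding log output (8001 with TensorRT 8.0.1, GPU with the device, FP16 with the selected precision); the remaining fields are not explained in the log, so treat their meaning as an assumption. A short sketch splitting the name on that reading:

```python
# Engine cache filename as seen in the log; field meanings inferred from
# context (8001 ~ TensorRT 8.0.1, GPU = device, FP16 = precision).
cache = "networks/bvlc_googlenet.caffemodel.1.1.8001.GPU.FP16.engine"
fields = cache.rsplit("/", 1)[-1].split(".")
model = ".".join(fields[:2])             # 'bvlc_googlenet.caffemodel'
trt_version, device, precision = fields[4:7]
print(model, trt_version, device, precision)
```

This is why deleting the .engine file (or changing TensorRT version, device, or precision) triggers a fresh, slow engine build instead of the fast cache load shown here.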
[image] loaded 'images/orange_0.jpg'  (1024x683, 3 channels)
Traceback (most recent call last):
  File "./imagenet.py", line 68, in <module>
    class_id, confidence = net.Classify(img)
Exception: jetson.inference -- imageNet.Classify() encountered an error classifying the image
root@jetson:/jetson-inference/build/aarch64/bin# echo as soon as the default clock settings are restored with DVFS enabled, classification fails once again on this Jetson module
as soon as the default clock settings are restored with DVFS enabled, classification fails once again on this Jetson module
root@jetson:/jetson-inference/build/aarch64/bin# exit
exit
jetson@jetson:~/jetson-inference$ exit
logout
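In the failing run above, imageNet.Classify() surfaces the problem only as a plain Exception. A script can at least point the user at the jetson_clocks workaround when that happens. This is a hedged sketch: `classify` is a hypothetical stand-in for net.Classify, and the retry count is arbitrary; it is not part of the jetson.inference API.

```python
def classify_with_report(classify, img, retries=2):
    """Call a Classify-like function, retrying, and report the DVFS hint on failure.

    `classify` is a hypothetical stand-in for jetson.inference's net.Classify,
    which raises a plain Exception on error (as seen in the transcript).
    """
    last_err = None
    for _attempt in range(1 + retries):
        try:
            return classify(img)
        except Exception as err:
            last_err = err
    raise RuntimeError(
        "classification failed; try 'sudo jetson_clocks' to disable DVFS"
    ) from last_err
```

With the real API this would be called as `classify_with_report(net.Classify, img)`; on this module it would still fail until clocks are pinned, but the error message now names the likely fix.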