Hello @Morganh,
I executed the commands you mentioned above on the Xavier NX, but I didn't get the same results as yours.
I ran the following two commands:
Hello @Morganh, Thank you for providing the information.
It is very strange that I am executing the same commands on the NX and my results are totally different. Do you have any idea why this is so?
Also, could you provide the JetPack and TensorRT versions on your NX? Do you think my results differ from yours because I might have different JetPack and TensorRT versions?
My JetPack is 4.6 and TensorRT is 8.0.1.6.
nvidia@ubuntu:/tmp/forum_216454$ dpkg -l |grep cuda
ii cuda-cccl-11-4 11.4.167-1 arm64 CUDA CCCL
ii cuda-command-line-tools-11-4 11.4.14-1 arm64 CUDA command-line tools
ii cuda-compiler-11-4 11.4.14-1 arm64 CUDA compiler
ii cuda-cudart-11-4 11.4.167-1 arm64 CUDA Runtime native Libraries
ii cuda-cudart-dev-11-4 11.4.167-1 arm64 CUDA Runtime native dev links, headers
ii cuda-cuobjdump-11-4 11.4.167-1 arm64 CUDA cuobjdump
ii cuda-cupti-11-4 11.4.167-1 arm64 CUDA profiling tools runtime libs.
ii cuda-cupti-dev-11-4 11.4.167-1 arm64 CUDA profiling tools interface.
ii cuda-cuxxfilt-11-4 11.4.167-1 arm64 CUDA cuxxfilt
ii cuda-documentation-11-4 11.4.167-1 arm64 CUDA documentation
ii cuda-driver-dev-11-4 11.4.167-1 arm64 CUDA Driver native dev stub library
ii cuda-gdb-11-4 11.4.167-1 arm64 CUDA-GDB
ii cuda-libraries-11-4 11.4.14-1 arm64 CUDA Libraries 11.4 meta-package
ii cuda-libraries-dev-11-4 11.4.14-1 arm64 CUDA Libraries 11.4 development meta-package
ii cuda-nvcc-11-4 11.4.166-1 arm64 CUDA nvcc
ii cuda-nvdisasm-11-4 11.4.167-1 arm64 CUDA disassembler
ii cuda-nvml-dev-11-4 11.4.167-1 arm64 NVML native dev links, headers
ii cuda-nvprof-11-4 11.4.166-1 arm64 CUDA Profiler tools
ii cuda-nvprune-11-4 11.4.167-1 arm64 CUDA nvprune
ii cuda-nvrtc-11-4 11.4.166-1 arm64 NVRTC native runtime libraries
ii cuda-nvrtc-dev-11-4 11.4.166-1 arm64 NVRTC native dev links, headers
ii cuda-nvtx-11-4 11.4.166-1 arm64 NVIDIA Tools Extension
ii cuda-samples-11-4 11.4.166-1 arm64 CUDA example applications
ii cuda-sanitizer-11-4 11.4.166-1 arm64 CUDA Sanitizer
ii cuda-toolkit-11-4 11.4.14-1 arm64 CUDA Toolkit 11.4 meta-package
ii cuda-toolkit-11-4-config-common 11.4.167-1 all Common config package for CUDA Toolkit 11.4.
ii cuda-toolkit-11-config-common 11.4.167-1 all Common config package for CUDA Toolkit 11.
ii cuda-toolkit-config-common 11.4.167-1 all Common config package for CUDA Toolkit.
ii cuda-tools-11-4 11.4.14-1 arm64 CUDA Tools meta-package
ii cuda-visual-tools-11-4 11.4.14-1 arm64 CUDA visual tools
ii graphsurgeon-tf 8.4.0-1+cuda11.4 arm64 GraphSurgeon for TensorRT package
ii libcudnn8 8.3.2.49-1+cuda11.4 arm64 cuDNN runtime libraries
ii libcudnn8-dev 8.3.2.49-1+cuda11.4 arm64 cuDNN development libraries and headers
ii libcudnn8-samples 8.3.2.49-1+cuda11.4 arm64 cuDNN samples
ii libnvinfer-bin 8.4.0-1+cuda11.4 arm64 TensorRT binaries
ii libnvinfer-dev 8.4.0-1+cuda11.4 arm64 TensorRT development libraries and headers
ii libnvinfer-doc 8.4.0-1+cuda11.4 all TensorRT documentation
ii libnvinfer-plugin-dev 8.4.0-1+cuda11.4 arm64 TensorRT plugin libraries
ii libnvinfer-plugin8 8.4.0-1+cuda11.4 arm64 TensorRT plugin libraries
ii libnvinfer-samples 8.4.0-1+cuda11.4 all TensorRT samples
ii libnvinfer8 8.4.0-1+cuda11.4 arm64 TensorRT runtime libraries
ii libnvonnxparsers-dev 8.4.0-1+cuda11.4 arm64 TensorRT ONNX libraries
ii libnvonnxparsers8 8.4.0-1+cuda11.4 arm64 TensorRT ONNX libraries
ii libnvparsers-dev 8.4.0-1+cuda11.4 arm64 TensorRT parsers libraries
ii libnvparsers8 8.4.0-1+cuda11.4 arm64 TensorRT parsers libraries
ii nvidia-container-csv-cuda 11.4.14-1 arm64 Jetpack CUDA CSV file
ii nvidia-l4t-cuda 34.0.1-20220215194304 arm64 NVIDIA CUDA Package
ii python3-libnvinfer 8.4.0-1+cuda11.4 arm64 Python 3 bindings for TensorRT
ii python3-libnvinfer-dev 8.4.0-1+cuda11.4 arm64 Python 3 development package for TensorRT
ii tensorrt 8.4.0.6-1+cuda11.4 arm64 Meta package of TensorRT
ii uff-converter-tf 8.4.0-1+cuda11.4 arm64 UFF converter for TensorRT package
nvidia@nvidia-desktop:~$ dpkg -l | grep cuda
ii cuda-command-line-tools-10-2 10.2.460-1 arm64 CUDA command-line tools
ii cuda-compiler-10-2 10.2.460-1 arm64 CUDA compiler
ii cuda-cudart-10-2 10.2.300-1 arm64 CUDA Runtime native Libraries
ii cuda-cudart-dev-10-2 10.2.300-1 arm64 CUDA Runtime native dev links, headers
ii cuda-cuobjdump-10-2 10.2.300-1 arm64 CUDA cuobjdump
ii cuda-cupti-10-2 10.2.300-1 arm64 CUDA profiling tools runtime libs.
ii cuda-cupti-dev-10-2 10.2.300-1 arm64 CUDA profiling tools interface.
ii cuda-documentation-10-2 10.2.300-1 arm64 CUDA documentation
ii cuda-driver-dev-10-2 10.2.300-1 arm64 CUDA Driver native dev stub library
ii cuda-gdb-10-2 10.2.300-1 arm64 CUDA-GDB
ii cuda-libraries-10-2 10.2.460-1 arm64 CUDA Libraries 10.2 meta-package
ii cuda-libraries-dev-10-2 10.2.460-1 arm64 CUDA Libraries 10.2 development meta-package
ii cuda-memcheck-10-2 10.2.300-1 arm64 CUDA-MEMCHECK
ii cuda-nvcc-10-2 10.2.300-1 arm64 CUDA nvcc
ii cuda-nvdisasm-10-2 10.2.300-1 arm64 CUDA disassembler
ii cuda-nvgraph-10-2 10.2.300-1 arm64 NVGRAPH native runtime libraries
ii cuda-nvgraph-dev-10-2 10.2.300-1 arm64 NVGRAPH native dev links, headers
ii cuda-nvml-dev-10-2 10.2.300-1 arm64 NVML native dev links, headers
ii cuda-nvprof-10-2 10.2.300-1 arm64 CUDA Profiler tools
ii cuda-nvprune-10-2 10.2.300-1 arm64 CUDA nvprune
ii cuda-nvrtc-10-2 10.2.300-1 arm64 NVRTC native runtime libraries
ii cuda-nvrtc-dev-10-2 10.2.300-1 arm64 NVRTC native dev links, headers
ii cuda-nvtx-10-2 10.2.300-1 arm64 NVIDIA Tools Extension
ii cuda-samples-10-2 10.2.300-1 arm64 CUDA example applications
ii cuda-toolkit-10-2 10.2.460-1 arm64 CUDA Toolkit 10.2 meta-package
ii cuda-tools-10-2 10.2.460-1 arm64 CUDA Tools meta-package
ii cuda-visual-tools-10-2 10.2.460-1 arm64 CUDA visual tools
ii graphsurgeon-tf 8.0.1-1+cuda10.2 arm64 GraphSurgeon for TensorRT package
ii libcudnn8 8.2.1.32-1+cuda10.2 arm64 cuDNN runtime libraries
ii libcudnn8-dev 8.2.1.32-1+cuda10.2 arm64 cuDNN development libraries and headers
ii libcudnn8-samples 8.2.1.32-1+cuda10.2 arm64 cuDNN documents and samples
ii libnvinfer-bin 8.0.1-1+cuda10.2 arm64 TensorRT binaries
ii libnvinfer-dev 8.0.1-1+cuda10.2 arm64 TensorRT development libraries and headers
ii libnvinfer-doc 8.0.1-1+cuda10.2 all TensorRT documentation
ii libnvinfer-plugin-dev 8.0.1-1+cuda10.2 arm64 TensorRT plugin libraries
ii libnvinfer-plugin8 8.0.1-1+cuda10.2 arm64 TensorRT plugin libraries
ii libnvinfer-samples 8.0.1-1+cuda10.2 all TensorRT samples
ii libnvinfer8 8.0.1-1+cuda10.2 arm64 TensorRT runtime libraries
ii libnvonnxparsers-dev 8.0.1-1+cuda10.2 arm64 TensorRT ONNX libraries
ii libnvonnxparsers8 8.0.1-1+cuda10.2 arm64 TensorRT ONNX libraries
ii libnvparsers-dev 8.0.1-1+cuda10.2 arm64 TensorRT parsers libraries
ii libnvparsers8 8.0.1-1+cuda10.2 arm64 TensorRT parsers libraries
ii nvidia-container-csv-cuda 10.2.460-1 arm64 Jetpack CUDA CSV file
ii nvidia-container-csv-cudnn 8.2.1.32-1+cuda10.2 arm64 Jetpack CUDNN CSV file
ii nvidia-container-csv-tensorrt 8.0.1.6-1+cuda10.2 arm64 Jetpack TensorRT CSV file
ii nvidia-l4t-cuda 32.6.1-20210916210945 arm64 NVIDIA CUDA Package
ii python3-libnvinfer 8.0.1-1+cuda10.2 arm64 Python 3 bindings for TensorRT
ii python3-libnvinfer-dev 8.0.1-1+cuda10.2 arm64 Python 3 development package for TensorRT
ii tensorrt 8.0.1.6-1+cuda10.2 arm64 Meta package of TensorRT
ii uff-converter-tf 8.0.1-1+cuda10.2 arm64 UFF converter for TensorRT package
Mine are older versions than yours. So do you think this could be the reason?
Hello @Morganh,
Since my XNX uses a custom carrier board, a JetPack image with newer TensorRT and CUDA versions will take time to deliver. For verification purposes, could you try generating the INT8 engine file with the .etlt model and calibration file below? efficientdet_d2.cal (59.2 KB) model_int8.step-562500.etlt (8.9 MB)
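For reference, an INT8 engine is typically built from an .etlt with `tao-converter`. A minimal sketch, not the confirmed command for this model: `<key>` is a placeholder for the encryption key used when exporting the .etlt, and EfficientDet may require additional flags (e.g. input dimensions) depending on the TAO version.

```shell
# Hypothetical tao-converter invocation; substitute <key> with the export key.
#   -c  INT8 calibration cache attached above
#   -t  target precision for the engine
#   -e  output path for the serialized engine
tao-converter model_int8.step-562500.etlt \
    -k <key> \
    -c efficientdet_d2.cal \
    -t int8 \
    -e model_int8.engine
```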
Could you also provide me with the information regarding the engine file size?
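To report the engine file size, a small sketch can be used (the engine filename here is a placeholder, not a file from this thread):

```python
import os

def engine_size_mb(path):
    """Return the on-disk size of a serialized engine file in MiB."""
    return os.path.getsize(path) / (1024 * 1024)

# Usage (assumed path): print(f"{engine_size_mb('model_int8.engine'):.1f} MiB")
```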
Thank you for your assistance
Hello @anshul12256, As the initial question in this topic has been solved, I would suggest closing it. Feel free to open a new topic for further questions. Thanks.