Benchmarking additional models on Jetson NX

Hi,

I am a new user looking to run benchmarks on my Xavier NX board. I was able to run the models at https://github.com/NVIDIA-AI-IOT/jetson_benchmarks and replicate the scores.

I would like to run more models. How do I benchmark models that are not in the model list, such as mobilenet_v2 or yolov3, using this framework? Any help or pointers would be appreciated.

Hi,

By default, the sample benchmarks all the models defined in the nx-benchmarks.csv file.

You can add your model to the list as well.

ModelName: your model name
FrameWork: your model format (caffe/onnx/tensorrt)
Devices: 1 for GPU only, 3 for GPU+2DLA
BatchSizeGPU: batch size used for GPU inference
BatchSizeDLA: batch size used for DLA inference
WS_GPU: memory available for GPU inference (tuning value)
WS_DLA: memory available for DLA inference (tuning value)
input: input tensor name (required by uff model)
output: output tensor name (required by uff and caffe model)
URL: model download link
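
Putting those fields together, a new CSV entry for an ONNX MobileNetV2 might look like the row below. The batch sizes, workspace values, and download URL are illustrative placeholders, not tuned settings; the `input`/`output` columns are left empty since they are only required for uff/caffe models:

```
ModelName,FrameWork,Devices,BatchSizeGPU,BatchSizeDLA,WS_GPU,WS_DLA,input,output,URL
mobilenet_v2,onnx,1,1,0,1024,0,,,https://example.com/mobilenet_v2.onnx
```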

Thanks.

Thank you for the reply.

How can I use one of the pre-trained models from https://ngc.nvidia.com/catalog/models/nvidia:trt_onnx_mobilenetv2_1_0_v100_16g_int8 for benchmarking?

Hi,

Please add the model's information to the nx-benchmarks.csv file mentioned above.
Thanks.
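
If you prefer to script the edit, the sketch below appends a new model row to nx-benchmarks.csv using the column names described earlier. All the values here (batch sizes, workspace sizes, the download URL) are illustrative assumptions for an ONNX MobileNetV2 entry; substitute the real download link and tune the workspace/batch settings for your board:

```python
import csv
import os

CSV_PATH = "nx-benchmarks.csv"  # path to the benchmark model list

# Column names follow the fields described in the thread above.
FIELDS = ["ModelName", "FrameWork", "Devices", "BatchSizeGPU", "BatchSizeDLA",
          "WS_GPU", "WS_DLA", "input", "output", "URL"]

# Illustrative values only -- not tuned settings.
new_row = {
    "ModelName": "mobilenet_v2",
    "FrameWork": "onnx",
    "Devices": 1,             # 1 = GPU only
    "BatchSizeGPU": 1,
    "BatchSizeDLA": 0,        # unused when Devices is 1
    "WS_GPU": 1024,           # GPU workspace (tuning value)
    "WS_DLA": 0,
    "input": "",              # only required for uff models
    "output": "",             # only required for uff/caffe models
    "URL": "https://example.com/mobilenet_v2.onnx",  # hypothetical link
}

# Write a header first if the file does not exist yet, then append the row.
write_header = not os.path.exists(CSV_PATH)
with open(CSV_PATH, "a", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    if write_header:
        writer.writeheader()
    writer.writerow(new_row)
```

After appending, re-run the benchmark script against the updated CSV as you did for the stock model list.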