TensorRT: How does TensorRT accelerate Darknet tiny YOLOv3 through the C++ API?

I have already obtained the ONNX model of tiny YOLOv3, but I don't know how to use the C++ API to run the ONNX model.

Some materials say:
TRT C++ API + TRT built-in ONNX parser: like the other TRT C++ samples, e.g. sampleFasterRCNN, parse yolov3.onnx with the TRT built-in ONNX parser, then use the TRT C++ API to build the engine and run inference.
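If I understand that workflow, it would look roughly like the sketch below (a minimal, untested outline assuming a TensorRT 7-era API with `NvInfer.h` and `NvOnnxParser.h`; the file name `yolov3.onnx` and workspace size are just placeholders):

```cpp
#include <cstdint>
#include <iostream>
#include "NvInfer.h"
#include "NvOnnxParser.h"

// Minimal logger that the TensorRT API requires.
class Logger : public nvinfer1::ILogger {
    void log(Severity severity, const char* msg) override {
        if (severity <= Severity::kWARNING)
            std::cout << msg << std::endl;
    }
} gLogger;

int main() {
    using namespace nvinfer1;

    // 1. Create a builder and an explicit-batch network
    //    (explicit batch is required by the ONNX parser).
    IBuilder* builder = createInferBuilder(gLogger);
    const uint32_t flags =
        1U << static_cast<uint32_t>(NetworkDefinitionCreationFlag::kEXPLICIT_BATCH);
    INetworkDefinition* network = builder->createNetworkV2(flags);

    // 2. Parse the ONNX model into the network definition.
    nvonnxparser::IParser* parser = nvonnxparser::createParser(*network, gLogger);
    if (!parser->parseFromFile("yolov3.onnx",
            static_cast<int>(ILogger::Severity::kWARNING))) {
        std::cerr << "Failed to parse ONNX file" << std::endl;
        return 1;
    }

    // 3. Build the engine.
    IBuilderConfig* config = builder->createBuilderConfig();
    config->setMaxWorkspaceSize(1 << 28);  // 256 MiB scratch space (placeholder)
    ICudaEngine* engine = builder->buildEngineWithConfig(*network, *config);

    // 4. Run inference: allocate device buffers for the network's inputs and
    //    outputs, copy the preprocessed image in, then execute.
    IExecutionContext* context = engine->createExecutionContext();
    // context->executeV2(bindings);  // bindings = array of device pointers

    return 0;
}
```

Is this the right shape, and if so, where does the image preprocessing and the YOLO output decoding fit in?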

So I tried tensorrt/samples/trtexec/trtexec.cpp:
$ ./trtexec --onnx=yolov3.onnx

I can see that the inference process finishes, but there are no test images involved, so I can't use it to do actual object detection.