I had no issue building the trt-yolo-app (note that I’m not using the deepstream-yolo-app).
I’m using the latest CUDA 10.1 (downloaded last week), along with the latest cuDNN and TensorRT.
The OS is the latest Ubuntu 18.04 LTS.
I ran it with the Darknet JPEG images using the default kFLOAT (i.e. FP32) precision, and detection works with no issues. Here’s the command line I ran:
> ./trt-yolo-app --flagfile=config/yolov3.txt
The only mod I made to the yolov3.txt file was to enable:
--print_prediction_info=true
All expected predictions were correct.
My next test was to try INT8, which appears to be supported on the GTX 1080 Ti.
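As a sanity check, the TensorRT builder can report whether a platform has fast INT8 support. Here is a minimal sketch against the TensorRT 5 C++ API (the Logger class is just the boilerplate the builder requires; nothing here is specific to the trt-yolo-app):

#include <iostream>
#include "NvInfer.h"

// Minimal logger; the TensorRT builder requires an ILogger instance.
class Logger : public nvinfer1::ILogger
{
    void log(Severity severity, const char* msg) override
    {
        if (severity <= Severity::kWARNING) std::cout << msg << std::endl;
    }
} gLogger;

int main()
{
    nvinfer1::IBuilder* builder = nvinfer1::createInferBuilder(gLogger);
    std::cout << "Fast INT8 supported: " << std::boolalpha
              << builder->platformHasFastInt8() << std::endl;
    builder->destroy();
    return 0;
}

On the GTX 1080 Ti (compute capability 6.1) this should print true.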
I modified the yolov3.txt to enable:
--precision=kINT8
--calibration_table_path=data/yolov3-calibration.table
Then I re-ran the following:
> ./trt-yolo-app --flagfile=config/yolov3.txt
And it produced the following errors:
Building the TensorRT Engine...
Using cached calibration table to build the engine
ERROR: ../builder/cudnnBuilder2.cpp (1791) - Misc Error in createRegionScalesFromTensorScales: -1 (Could not find scales for tensor (Unnamed Layer* 0) [Constant]_output.)
ERROR: ../builder/cudnnBuilder2.cpp (1791) - Misc Error in createRegionScalesFromTensorScales: -1 (Could not find scales for tensor (Unnamed Layer* 0) [Constant]_output.)
trt-yolo-app: /home/michael/deepstream_reference_apps/yolo/lib/yolo.cpp:458: void Yolo::createYOLOEngine(nvinfer1::DataType, Int8EntropyCalibrator*): Assertion `m_Engine != nullptr' failed.
Aborted (core dumped)
This same experiment works fine on Jetson Xavier, but not on the GTX 1080 Ti.
Any ideas why it’s failing?
As per the engineering team, calibration tables generated with IInt8EntropyCalibrator might not work across different devices. Use IInt8EntropyCalibrator2 instead to ensure calibration tables are portable across devices.
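For anyone hitting the same errors, the change is a one-line edit to the calibrator’s base class. A minimal sketch follows; the constructor, member names, and I/O details are illustrative rather than verbatim from the trt-yolo-app sources:

#include <fstream>
#include <iterator>
#include <string>
#include <vector>
#include "NvInfer.h"

// Was: public nvinfer1::IInt8EntropyCalibrator
class Int8EntropyCalibrator : public nvinfer1::IInt8EntropyCalibrator2
{
public:
    Int8EntropyCalibrator(int batchSize, std::string tablePath)
        : m_BatchSize(batchSize), m_TablePath(std::move(tablePath)) {}

    int getBatchSize() const override { return m_BatchSize; }

    bool getBatch(void* bindings[], const char* names[], int nbBindings) override
    {
        // Copy the next preprocessed calibration batch into device memory here;
        // return false once all calibration images have been consumed.
        return false;
    }

    const void* readCalibrationCache(size_t& length) override
    {
        // Reuse a cached calibration table from disk if one exists.
        m_Cache.clear();
        std::ifstream input(m_TablePath, std::ios::binary);
        input >> std::noskipws;
        if (input.good())
            std::copy(std::istream_iterator<char>(input), std::istream_iterator<char>(),
                      std::back_inserter(m_Cache));
        length = m_Cache.size();
        return length ? m_Cache.data() : nullptr;
    }

    void writeCalibrationCache(const void* cache, size_t length) override
    {
        // Persist the freshly generated table so later runs can skip calibration.
        std::ofstream output(m_TablePath, std::ios::binary);
        output.write(static_cast<const char*>(cache), length);
    }

private:
    int m_BatchSize;
    std::string m_TablePath;
    std::vector<char> m_Cache;
};

The overridden methods have the same signatures in both interfaces, so switching the base class is the only source change required.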
Hi,
Thank you for the guidance. I made the one change suggested above; it compiled fine, but now I get an abort at runtime. Here’s the output:
Building the TensorRT Engine...
Using cached calibration table to build the engine
terminate called after throwing an instance of 'std::out_of_range'
what(): _Map_base::at
Aborted (core dumped)
By any chance, does the engineering team have time to make that one change and test on a GTX 1080 Ti on Ubuntu? They may find a set of easy-to-fix errors that could all be addressed at once, which would be more time-efficient for your support team than handling each issue separately.
Our company is only a middleman that sells hardware to the end customer. We provide software guidance to give customers a good out-of-the-box experience, e.g. understanding how to do optimized inference using TensorRT via the trt-yolo-app. According to the trt-yolo-app example, INT8 inference should work on any GPU that supports it, but as I’m finding out, it only works on Jetson GPUs.
Since I imagine thousands of customers want this example to run on more than Jetson, it seems unreasonable to expect each customer to deduce from the ambiguous error reporting that they should use IInt8EntropyCalibrator2 and create a home-made calibration table.
May I suggest that, if IInt8EntropyCalibrator2 is portable across all GPUs, the non-portable IInt8EntropyCalibrator be deprecated from TensorRT, and that NVIDIA’s trt-yolo-app example be updated with a new calibration table? That would create an excellent out-of-the-box experience for customers.
As for a viable calibration table, does NVIDIA have one they can provide to us now? We are not TensorRT experts and won’t be able to create a custom calibration table ourselves, since we are just validating the out-of-the-box experience for customers.
According to one of the contributors, if you use TensorRT v5.0 instead of v5.1, the example should work as is for both Jetson and discrete GPUs.
Otherwise, you can generate a new table as mentioned in my previous comment. It shouldn’t be much different from how you used the table in your initial post: delete the existing calibration table file and provide some image paths in the corresponding calibration-images text file (a sketch of the steps follows below).
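For reference, the regeneration boils down to something like the following; the image path and the name of the calibration-images list file are illustrative, so use whatever your flag file actually references:

> rm data/yolov3-calibration.table
> ls /path/to/darknet/data/*.jpg > data/calibration_images.txt
> ./trt-yolo-app --flagfile=config/yolov3.txt

With no cached table present, the app runs INT8 calibration over the listed images and writes a fresh table before building the engine.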
If you choose to generate a new table, feel free to submit a pull request to the repo so your future customers can make use of it as well. I’ll see if I can get it merged by someone, as the repo is no longer being actively maintained.
Thank you again.
I was able to regenerate the calibration table with the images supplied with the Darknet example. All is working well now. I didn’t need to revert to TensorRT 5.0, which is good.
Michael