tensorRT_optimization tool error

I used the convert-to-uff utility successfully on this model http://download.tensorflow.org/models/object_detection/ssd_mobilenet_v2_coco_2018_03_29.tar.gz. Then, to perform inference with this model, I started with the tensorRT_optimization tool to acquire a .bin file like the one used in the DNN samples.
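
For reference, the conversion step was roughly of the following form (the output node names here are an assumption mirroring the outputBlobs I pass to tensorRT_optimization below, and frozen_inference_graph.pb comes from the extracted archive):

convert-to-uff frozen_inference_graph.pb -o ssd_mobilenet_coco.uff -O detection_scores -O detection_boxes -O detection_classes -O num_detections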

I input this command:

./tensorRT_optimization --modelType=uff --uffFile=/home/nvidia/Desktop/ssd_mobilenet_coco.uff --outputBlobs=detection_scores,detection_boxes,detection_classes,num_detections --inputDims=3x300x300

and get this error:

nvrm_gpu: Bug 200215060 workaround enabled.
Initializing network optimizer on model /home/nvidia/Desktop/ssd_mobilenet_coco.uff.
Error: DW_INTERNAL_ERROR: Unable to parse uff file.

I am confident the information I entered is correct, so the issue is likely something else.

Dear bbauerly,
The issue could be a mismatch of UFF parser versions. Could you confirm whether you used the convert-to-uff utility from the TensorRT version shipped with SDK Manager?
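
One quick way to check which UFF converter version is installed on the host (assuming the Python uff package that ships with TensorRT is installed) is:

python -c "import uff; print(uff.__version__)"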

I used the latest TensorRT version (5.1.5.0 GA) on a different machine to create the UFF file. Running ‘$ convert-to-uff’ on my DRIVE PX doesn’t work; how do I check the TensorRT version on my DRIVE PX and/or use the ‘convert-to-uff’ tool on the DRIVE PX?

Dear bbauerly,
Note that the last release for DRIVE PX 2 ships DriveWorks 1.2, which supports TensorRT 4.0. It seems you used TensorRT 5.1.5. Please use the TensorRT that SDK Manager installs on the host machine when you flash your board with DRIVE OS 5.0.10.3.
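
On the target, you can check the installed TensorRT version by querying the package database, for example:

dpkg -l | grep -iE "tensorrt|nvinfer"

or by inspecting the NV_TENSORRT version macros in the NvInfer headers (on aarch64 targets the headers are typically under /usr/include/aarch64-linux-gnu/, but this path is an assumption and may differ between releases):

grep "NV_TENSORRT" /usr/include/aarch64-linux-gnu/NvInfer*.h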