Inference error using YOLOv2 on Jetson TX2

As the title says, I am using a TensorFlow version of the YOLO model (the model I used is here: [url]https://github.com/experiencor/keras-yolo2[/url]; I forked it a while ago, but it should be similar at the core), and I keep getting wrong results with this model. I have tested a few hypotheses, and it seems that the model somehow produces wrong output when running with TensorRT.

What I changed was to replace Leaky ReLU with eltwise and scale layers, as suggested in [url]https://devtalk.nvidia.com/default/topic/990426/jetson-tx1/tensorrt-yolo-inference-error/post/5271228/#5271228[/url]. Still, my model produces results that are way off from what they should be. Any help is appreciated.
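
To clarify what I mean by the replacement: it is roughly along these lines (a minimal tf.keras sketch, not the exact code from my fork; the alpha value of 0.1 is just the leaky slope YOLOv2 normally uses):

[code]
import tensorflow as tf
from tensorflow.keras import layers

ALPHA = 0.1  # leaky slope typically used by YOLOv2

def leaky_relu_as_scale_eltwise(x, alpha=ALPHA):
    # Scale layer: multiply the activation by the leaky slope
    scaled = layers.Lambda(lambda t: t * alpha)(x)
    # Eltwise MAX layer: max(x, alpha * x) == LeakyReLU(x) for 0 < alpha < 1
    return layers.Maximum()([x, scaled])

# Illustrative conv block using the replacement activation
inputs = layers.Input(shape=(416, 416, 3))
x = layers.Conv2D(32, 3, padding='same', use_bias=False)(inputs)
x = layers.BatchNormalization()(x)
x = leaky_relu_as_scale_eltwise(x)
model = tf.keras.Model(inputs, x)
model.summary()
[/code]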

Hi,

Could you share more information about your use case with us?
Do you run the model with TensorFlow or TensorRT?

If you are using TensorRT, could you first run inference with TensorFlow to validate the model?
You can find a TensorFlow wheel for Jetson here:
[url]https://devtalk.nvidia.com/default/topic/1031300/jetson-tx2/tensorflow-1-9-rc-wheel-with-jetpack-3-2-/[/url]
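
For example, a quick sanity check of a frozen graph with plain TensorFlow looks roughly like this (the file path and tensor names below are placeholders; please substitute the ones from your own export):

[code]
import numpy as np
import tensorflow as tf

# Placeholders only -- use the path and tensor names from your own export
GRAPH_PB = 'yolov2_frozen.pb'
INPUT_TENSOR = 'input_1:0'
OUTPUT_TENSOR = 'output_node:0'

# Load the frozen graph
with tf.gfile.GFile(GRAPH_PB, 'rb') as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

with tf.Graph().as_default() as graph:
    tf.import_graph_def(graph_def, name='')
    with tf.Session(graph=graph) as sess:
        # Run one dummy (or real, preprocessed) image through plain TensorFlow
        image = np.random.rand(1, 416, 416, 3).astype(np.float32)
        out = sess.run(OUTPUT_TENSOR, feed_dict={INPUT_TENSOR: image})
        print(out.shape)
[/code]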

Thanks.

Hi, I have found the issue; it was caused by the input ordering.
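
In case anyone else hits this, a rough sketch of the kind of mismatch I mean (assuming, as in my case, an NHWC input buffer from the TensorFlow side versus an NCHW input expected on the TensorRT side):

[code]
import numpy as np

# Dummy 416x416 RGB image in NHWC layout (as the TensorFlow graph expects)
image_nhwc = np.random.rand(1, 416, 416, 3).astype(np.float32)

# Transpose to NCHW layout before feeding the TensorRT engine
image_nchw = np.transpose(image_nhwc, (0, 3, 1, 2))
print(image_nchw.shape)  # (1, 3, 416, 416)
[/code]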

Can you share the process for optimizing YOLOv2 with TensorRT?