I was looking at some of the samples in /usr/src/tensorrt/samples/python/ and tried running a few. I noticed they do not seem to run faster or slower depending on the Jetson's power mode. Does this mean they are not using all available cores? How would I go about writing a PyTorch program that does take full advantage of all cores?

Also, are the provided TensorRT examples mainly for optimizing a model that has already been trained, or can TensorRT help speed up training time too?
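For context, here is a minimal sketch of what I mean by "using all cores" on the CPU side. This only covers PyTorch's CPU intra-op thread pool (assumption on my part: the sample workloads may be GPU-bound, in which case the GPU clocks set by the power mode would matter instead):

```python
import multiprocessing
import torch

# How many CPU cores the OS currently exposes. Jetson power modes can
# take cores offline, so this number can change with the mode.
cores = multiprocessing.cpu_count()
print("CPU cores visible:", cores)

# How many threads PyTorch uses for intra-op CPU parallelism by default.
print("PyTorch intra-op threads (default):", torch.get_num_threads())

# Explicitly pin PyTorch to every visible core.
torch.set_num_threads(cores)
print("PyTorch intra-op threads (after set):", torch.get_num_threads())
```

Is something like this the right approach, or is core usage on the Jetson governed elsewhere?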
Additionally, is PyTorch any faster on a Jetson than Keras with TensorFlow? If I want to build custom models and train them on the Jetson, what is the best and easiest way to start doing that in an optimized way?