Nice! I have difficulty interpreting the layers. I initially used a Python script with the PyTorch library. For example, I ran this script:
with torch.no_grad():
    output, _ = model(image)  # run inference without tracking gradients
print(type(output))       # type of the returned object
print(len(output))        # size of the first dimension
print(len(output[0]))     # size of the second dimension
print(len(output[0][0]))  # size of the third dimension
print(output)
The output was:

<class 'torch.Tensor'>
1
130050
57
tensor([[[5.36894e+00, 8.71477e+00, 1.32241e+01, ..., 1.36432e+00, 1.92210e+01, 2.08646e-01],
[1.09083e+01, 9.69437e+00, 2.14988e+01, ..., 5.92880e+00, 1.64280e+01, 2.65798e-01],
[1.91330e+01, 9.81835e+00, 3.46214e+01, ..., 8.89651e+00, 1.37115e+01, 2.66302e-01],
...,
[1.76024e+03, 1.01203e+03, 3.55187e+02, ..., 1.77500e+03, 1.08716e+03, 3.54368e-01],
[1.80393e+03, 1.00688e+03, 2.95170e+02, ..., 1.83554e+03, 1.08505e+03, 3.38366e-01],
[1.86751e+03, 1.01366e+03, 3.67770e+02, ..., 1.84740e+03, 1.09743e+03, 3.18240e-01]]], device='cuda:0')
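As a side note, a more direct way to see the same numbers is to print the tensor's shape. The snippet below is just a minimal sketch of that, assuming model and image are defined as above:

import torch

with torch.no_grad():
    output, _ = model(image)  # same forward pass as in the snippet above

# One call shows all three dimensions at once, equivalent to the nested len() calls:
# for this run it should print torch.Size([1, 130050, 57]),
# i.e. (batch dimension, 130050 rows, 57 values per row).
print(output.shape)
print(output.dtype)   # element type, e.g. torch.float32
print(output.device)  # where the tensor lives, e.g. cuda:0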
Can the layers be interpreted from this output? If so, how?