I set fp16 when calling build_engine, but why does the dtype come out as fp32?

Description

A clear and concise description of the bug or issue.

Environment

Jetson TX2 8GB

TensorRT Version: 7.1.3.0
GPU Type:
Nvidia Driver Version:
CUDA Version:
CUDNN Version:
Operating System + Version:
Python Version (if applicable):
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):

Relevant Files

Please attach or include links to any models, data, files, or scripts necessary to reproduce your issue. (Github repo, Google Drive, Dropbox, etc.)

Steps To Reproduce

Please include:

  • Exact steps/commands to build your repro
  • Exact steps/commands to run your repro
  • Full traceback of errors encountered

Hello, I created a model in PyTorch, converted it to ONNX, and modified it with onnx-graphsurgeon. As far as I know, the network stays at fp32 precision by default if I do nothing. So why isn't it converted when I set the fp16 flag and run build_engine?
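
For reference, a minimal sketch of how the FP16 flag is usually requested when building from ONNX (assuming the TensorRT 7 Python API; the model path and workspace size below are placeholders):

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.INFO)

def build_fp16_engine(onnx_path):
    builder = trt.Builder(TRT_LOGGER)
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, TRT_LOGGER)

    # Parse the ONNX file and report any parser errors
    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            for i in range(parser.num_errors):
                print(parser.get_error(i))
            return None

    config = builder.create_builder_config()
    config.max_workspace_size = 1 << 28  # 256 MiB; adjust for the TX2
    if builder.platform_has_fast_fp16:
        config.set_flag(trt.BuilderFlag.FP16)  # allow, but do not force, FP16 kernels

    return builder.build_engine(network, config)

Note that BuilderFlag.FP16 only allows TensorRT to pick FP16 kernels; it does not by itself change the dtype reported for the network inputs and outputs.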

Hi,
We request you to share the ONNX model and the script, if not already shared, so that we can assist you better.
Alongside, you can try a few things:

  1. validating your model with the below snippet

check_model.py

import onnx

# Path to your ONNX model (placeholder)
filename = "your_model.onnx"
model = onnx.load(filename)
onnx.checker.check_model(model)
  2. Try running your model with the trtexec command, for example as shown below.
https://github.com/NVIDIA/TensorRT/tree/master/samples/opensource/trtexec
In case you are still facing the issue, we request you to share the trtexec "--verbose" log for further debugging.
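
A typical invocation would look like this (the model path is a placeholder; --fp16 requests FP16 mode and --verbose prints the full builder log):

trtexec --onnx=your_model.onnx --fp16 --verbose --saveEngine=your_model.engine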
Thanks!

I extended the existing ResNet50 model to four output nodes via onnx_graphsurgeon, roughly as in the sketch below; the resulting graph follows the sketch.
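
A minimal sketch of marking the extra intermediate tensors as graph outputs with onnx-graphsurgeon (file names are placeholders; the tensor names "356", "398", and "460" are the ones that appear in the dump below):

import numpy as np
import onnx
import onnx_graphsurgeon as gs

graph = gs.import_onnx(onnx.load("resnet50.onnx"))
tensors = graph.tensors()

# Promote three intermediate activations to graph outputs,
# keeping the original "outputs" tensor as well.
for name in ["356", "398", "460"]:
    graph.outputs.append(tensors[name].to_variable(dtype=np.float32))

graph.cleanup().toposort()
onnx.save(gs.export_onnx(graph), "resnet50_4out.onnx")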

graph torch-jit-export (
  %inputs[FLOAT, 1x3x224x224]
) initializers (
  %497[FLOAT, 64x3x7x7]
  %498[FLOAT, 64]
  %500[FLOAT, 64x64x1x1]
  %501[FLOAT, 64]
  %503[FLOAT, 64x64x3x3]
  %504[FLOAT, 64]
  %506[FLOAT, 256x64x1x1]
  %507[FLOAT, 256]
  %509[FLOAT, 256x64x1x1]
  %510[FLOAT, 256]
  %512[FLOAT, 64x256x1x1]
  %513[FLOAT, 64]
  %515[FLOAT, 64x64x3x3]
  %516[FLOAT, 64]
  %518[FLOAT, 256x64x1x1]
  %519[FLOAT, 256]
  %521[FLOAT, 64x256x1x1]
  %522[FLOAT, 64]
  %524[FLOAT, 64x64x3x3]
  %525[FLOAT, 64]
  %527[FLOAT, 256x64x1x1]
  %528[FLOAT, 256]
  %530[FLOAT, 128x256x1x1]
  %531[FLOAT, 128]
  %533[FLOAT, 128x128x3x3]
  %534[FLOAT, 128]
  %536[FLOAT, 512x128x1x1]
  %537[FLOAT, 512]
  %539[FLOAT, 512x256x1x1]
  %540[FLOAT, 512]
  %542[FLOAT, 128x512x1x1]
  %543[FLOAT, 128]
  %545[FLOAT, 128x128x3x3]
  %546[FLOAT, 128]
  %548[FLOAT, 512x128x1x1]
  %549[FLOAT, 512]
  %551[FLOAT, 128x512x1x1]
  %552[FLOAT, 128]
  %554[FLOAT, 128x128x3x3]
  %555[FLOAT, 128]
  %557[FLOAT, 512x128x1x1]
  %558[FLOAT, 512]
  %560[FLOAT, 128x512x1x1]
  %561[FLOAT, 128]
  %563[FLOAT, 128x128x3x3]
  %564[FLOAT, 128]
  %566[FLOAT, 512x128x1x1]
  %567[FLOAT, 512]
  %569[FLOAT, 256x512x1x1]
  %570[FLOAT, 256]
  %572[FLOAT, 256x256x3x3]
  %573[FLOAT, 256]
  %575[FLOAT, 1024x256x1x1]
  %576[FLOAT, 1024]
  %578[FLOAT, 1024x512x1x1]
  %579[FLOAT, 1024]
  %581[FLOAT, 256x1024x1x1]
  %582[FLOAT, 256]
  %584[FLOAT, 256x256x3x3]
  %585[FLOAT, 256]
  %587[FLOAT, 1024x256x1x1]
  %588[FLOAT, 1024]
  %590[FLOAT, 256x1024x1x1]
  %591[FLOAT, 256]
  %593[FLOAT, 256x256x3x3]
  %594[FLOAT, 256]
  %596[FLOAT, 1024x256x1x1]
  %597[FLOAT, 1024]
  %599[FLOAT, 256x1024x1x1]
  %600[FLOAT, 256]
  %602[FLOAT, 256x256x3x3]
  %603[FLOAT, 256]
  %605[FLOAT, 1024x256x1x1]
  %606[FLOAT, 1024]
  %608[FLOAT, 256x1024x1x1]
  %609[FLOAT, 256]
  %611[FLOAT, 256x256x3x3]
  %612[FLOAT, 256]
  %614[FLOAT, 1024x256x1x1]
  %615[FLOAT, 1024]
  %617[FLOAT, 256x1024x1x1]
  %618[FLOAT, 256]
  %620[FLOAT, 256x256x3x3]
  %621[FLOAT, 256]
  %623[FLOAT, 1024x256x1x1]
  %624[FLOAT, 1024]
  %626[FLOAT, 512x1024x1x1]
  %627[FLOAT, 512]
  %629[FLOAT, 512x512x3x3]
  %630[FLOAT, 512]
  %632[FLOAT, 2048x512x1x1]
  %633[FLOAT, 2048]
  %635[FLOAT, 2048x1024x1x1]
  %636[FLOAT, 2048]
  %638[FLOAT, 512x2048x1x1]
  %639[FLOAT, 512]
  %641[FLOAT, 512x512x3x3]
  %642[FLOAT, 512]
  %644[FLOAT, 2048x512x1x1]
  %645[FLOAT, 2048]
  %647[FLOAT, 512x2048x1x1]
  %648[FLOAT, 512]
  %650[FLOAT, 512x512x3x3]
  %651[FLOAT, 512]
  %653[FLOAT, 2048x512x1x1]
  %654[FLOAT, 2048]
  %fc.weight[FLOAT, 3x2048]
  %fc.bias[FLOAT, 3]
) {
  %496 = Conv[dilations = [1, 1], group = 1, kernel_shape = [7, 7], pads = [3, 3, 3, 3], strides = [2, 2]](%inputs, %497, %498)
  %323 = Relu(%496)
  %324 = MaxPool[kernel_shape = [3, 3], pads = [1, 1, 1, 1], strides = [2, 2]](%323)
  %499 = Conv[dilations = [1, 1], group = 1, kernel_shape = [1, 1], pads = [0, 0, 0, 0], strides = [1, 1]](%324, %500, %501)
  %327 = Relu(%499)
  %502 = Conv[dilations = [1, 1], group = 1, kernel_shape = [3, 3], pads = [1, 1, 1, 1], strides = [1, 1]](%327, %503, %504)
  %330 = Relu(%502)
  %505 = Conv[dilations = [1, 1], group = 1, kernel_shape = [1, 1], pads = [0, 0, 0, 0], strides = [1, 1]](%330, %506, %507)
  %508 = Conv[dilations = [1, 1], group = 1, kernel_shape = [1, 1], pads = [0, 0, 0, 0], strides = [1, 1]](%324, %509, %510)
  %335 = Add(%505, %508)
  %336 = Relu(%335)
  %511 = Conv[dilations = [1, 1], group = 1, kernel_shape = [1, 1], pads = [0, 0, 0, 0], strides = [1, 1]](%336, %512, %513)
  %339 = Relu(%511)
  %514 = Conv[dilations = [1, 1], group = 1, kernel_shape = [3, 3], pads = [1, 1, 1, 1], strides = [1, 1]](%339, %515, %516)
  %342 = Relu(%514)
  %517 = Conv[dilations = [1, 1], group = 1, kernel_shape = [1, 1], pads = [0, 0, 0, 0], strides = [1, 1]](%342, %518, %519)
  %345 = Add(%517, %336)
  %346 = Relu(%345)
  %520 = Conv[dilations = [1, 1], group = 1, kernel_shape = [1, 1], pads = [0, 0, 0, 0], strides = [1, 1]](%346, %521, %522)
  %349 = Relu(%520)
  %523 = Conv[dilations = [1, 1], group = 1, kernel_shape = [3, 3], pads = [1, 1, 1, 1], strides = [1, 1]](%349, %524, %525)
  %352 = Relu(%523)
  %526 = Conv[dilations = [1, 1], group = 1, kernel_shape = [1, 1], pads = [0, 0, 0, 0], strides = [1, 1]](%352, %527, %528)
  %355 = Add(%526, %346)
  %356 = Relu(%355)
  %529 = Conv[dilations = [1, 1], group = 1, kernel_shape = [1, 1], pads = [0, 0, 0, 0], strides = [1, 1]](%356, %530, %531)
  %359 = Relu(%529)
  %532 = Conv[dilations = [1, 1], group = 1, kernel_shape = [3, 3], pads = [1, 1, 1, 1], strides = [2, 2]](%359, %533, %534)
  %362 = Relu(%532)
  %535 = Conv[dilations = [1, 1], group = 1, kernel_shape = [1, 1], pads = [0, 0, 0, 0], strides = [1, 1]](%362, %536, %537)
  %538 = Conv[dilations = [1, 1], group = 1, kernel_shape = [1, 1], pads = [0, 0, 0, 0], strides = [2, 2]](%356, %539, %540)
  %367 = Add(%535, %538)
  %368 = Relu(%367)
  %541 = Conv[dilations = [1, 1], group = 1, kernel_shape = [1, 1], pads = [0, 0, 0, 0], strides = [1, 1]](%368, %542, %543)
  %371 = Relu(%541)
  %544 = Conv[dilations = [1, 1], group = 1, kernel_shape = [3, 3], pads = [1, 1, 1, 1], strides = [1, 1]](%371, %545, %546)
  %374 = Relu(%544)
  %547 = Conv[dilations = [1, 1], group = 1, kernel_shape = [1, 1], pads = [0, 0, 0, 0], strides = [1, 1]](%374, %548, %549)
  %377 = Add(%547, %368)
  %378 = Relu(%377)
  %550 = Conv[dilations = [1, 1], group = 1, kernel_shape = [1, 1], pads = [0, 0, 0, 0], strides = [1, 1]](%378, %551, %552)
  %381 = Relu(%550)
  %553 = Conv[dilations = [1, 1], group = 1, kernel_shape = [3, 3], pads = [1, 1, 1, 1], strides = [1, 1]](%381, %554, %555)
  %384 = Relu(%553)
  %556 = Conv[dilations = [1, 1], group = 1, kernel_shape = [1, 1], pads = [0, 0, 0, 0], strides = [1, 1]](%384, %557, %558)
  %387 = Add(%556, %378)
  %388 = Relu(%387)
  %559 = Conv[dilations = [1, 1], group = 1, kernel_shape = [1, 1], pads = [0, 0, 0, 0], strides = [1, 1]](%388, %560, %561)
  %391 = Relu(%559)
  %562 = Conv[dilations = [1, 1], group = 1, kernel_shape = [3, 3], pads = [1, 1, 1, 1], strides = [1, 1]](%391, %563, %564)
  %394 = Relu(%562)
  %565 = Conv[dilations = [1, 1], group = 1, kernel_shape = [1, 1], pads = [0, 0, 0, 0], strides = [1, 1]](%394, %566, %567)
  %397 = Add(%565, %388)
  %398 = Relu(%397)
  %568 = Conv[dilations = [1, 1], group = 1, kernel_shape = [1, 1], pads = [0, 0, 0, 0], strides = [1, 1]](%398, %569, %570)
  %401 = Relu(%568)
  %571 = Conv[dilations = [1, 1], group = 1, kernel_shape = [3, 3], pads = [1, 1, 1, 1], strides = [2, 2]](%401, %572, %573)
  %404 = Relu(%571)
  %574 = Conv[dilations = [1, 1], group = 1, kernel_shape = [1, 1], pads = [0, 0, 0, 0], strides = [1, 1]](%404, %575, %576)
  %577 = Conv[dilations = [1, 1], group = 1, kernel_shape = [1, 1], pads = [0, 0, 0, 0], strides = [2, 2]](%398, %578, %579)
  %409 = Add(%574, %577)
  %410 = Relu(%409)
  %580 = Conv[dilations = [1, 1], group = 1, kernel_shape = [1, 1], pads = [0, 0, 0, 0], strides = [1, 1]](%410, %581, %582)
  %413 = Relu(%580)
  %583 = Conv[dilations = [1, 1], group = 1, kernel_shape = [3, 3], pads = [1, 1, 1, 1], strides = [1, 1]](%413, %584, %585)
  %416 = Relu(%583)
  %586 = Conv[dilations = [1, 1], group = 1, kernel_shape = [1, 1], pads = [0, 0, 0, 0], strides = [1, 1]](%416, %587, %588)
  %419 = Add(%586, %410)
  %420 = Relu(%419)
  %589 = Conv[dilations = [1, 1], group = 1, kernel_shape = [1, 1], pads = [0, 0, 0, 0], strides = [1, 1]](%420, %590, %591)
  %423 = Relu(%589)
  %592 = Conv[dilations = [1, 1], group = 1, kernel_shape = [3, 3], pads = [1, 1, 1, 1], strides = [1, 1]](%423, %593, %594)
  %426 = Relu(%592)
  %595 = Conv[dilations = [1, 1], group = 1, kernel_shape = [1, 1], pads = [0, 0, 0, 0], strides = [1, 1]](%426, %596, %597)
  %429 = Add(%595, %420)
  %430 = Relu(%429)
  %598 = Conv[dilations = [1, 1], group = 1, kernel_shape = [1, 1], pads = [0, 0, 0, 0], strides = [1, 1]](%430, %599, %600)
  %433 = Relu(%598)
  %601 = Conv[dilations = [1, 1], group = 1, kernel_shape = [3, 3], pads = [1, 1, 1, 1], strides = [1, 1]](%433, %602, %603)
  %436 = Relu(%601)
  %604 = Conv[dilations = [1, 1], group = 1, kernel_shape = [1, 1], pads = [0, 0, 0, 0], strides = [1, 1]](%436, %605, %606)
  %439 = Add(%604, %430)
  %440 = Relu(%439)
  %607 = Conv[dilations = [1, 1], group = 1, kernel_shape = [1, 1], pads = [0, 0, 0, 0], strides = [1, 1]](%440, %608, %609)
  %443 = Relu(%607)
  %610 = Conv[dilations = [1, 1], group = 1, kernel_shape = [3, 3], pads = [1, 1, 1, 1], strides = [1, 1]](%443, %611, %612)
  %446 = Relu(%610)
  %613 = Conv[dilations = [1, 1], group = 1, kernel_shape = [1, 1], pads = [0, 0, 0, 0], strides = [1, 1]](%446, %614, %615)
  %449 = Add(%613, %440)
  %450 = Relu(%449)
  %616 = Conv[dilations = [1, 1], group = 1, kernel_shape = [1, 1], pads = [0, 0, 0, 0], strides = [1, 1]](%450, %617, %618)
  %453 = Relu(%616)
  %619 = Conv[dilations = [1, 1], group = 1, kernel_shape = [3, 3], pads = [1, 1, 1, 1], strides = [1, 1]](%453, %620, %621)
  %456 = Relu(%619)
  %622 = Conv[dilations = [1, 1], group = 1, kernel_shape = [1, 1], pads = [0, 0, 0, 0], strides = [1, 1]](%456, %623, %624)
  %459 = Add(%622, %450)
  %460 = Relu(%459)
  %625 = Conv[dilations = [1, 1], group = 1, kernel_shape = [1, 1], pads = [0, 0, 0, 0], strides = [1, 1]](%460, %626, %627)
  %463 = Relu(%625)
  %628 = Conv[dilations = [1, 1], group = 1, kernel_shape = [3, 3], pads = [1, 1, 1, 1], strides = [2, 2]](%463, %629, %630)
  %466 = Relu(%628)
  %631 = Conv[dilations = [1, 1], group = 1, kernel_shape = [1, 1], pads = [0, 0, 0, 0], strides = [1, 1]](%466, %632, %633)
  %634 = Conv[dilations = [1, 1], group = 1, kernel_shape = [1, 1], pads = [0, 0, 0, 0], strides = [2, 2]](%460, %635, %636)
  %471 = Add(%631, %634)
  %472 = Relu(%471)
  %637 = Conv[dilations = [1, 1], group = 1, kernel_shape = [1, 1], pads = [0, 0, 0, 0], strides = [1, 1]](%472, %638, %639)
  %475 = Relu(%637)
  %640 = Conv[dilations = [1, 1], group = 1, kernel_shape = [3, 3], pads = [1, 1, 1, 1], strides = [1, 1]](%475, %641, %642)
  %478 = Relu(%640)
  %643 = Conv[dilations = [1, 1], group = 1, kernel_shape = [1, 1], pads = [0, 0, 0, 0], strides = [1, 1]](%478, %644, %645)
  %481 = Add(%643, %472)
  %482 = Relu(%481)
  %646 = Conv[dilations = [1, 1], group = 1, kernel_shape = [1, 1], pads = [0, 0, 0, 0], strides = [1, 1]](%482, %647, %648)
  %485 = Relu(%646)
  %649 = Conv[dilations = [1, 1], group = 1, kernel_shape = [3, 3], pads = [1, 1, 1, 1], strides = [1, 1]](%485, %650, %651)
  %488 = Relu(%649)
  %652 = Conv[dilations = [1, 1], group = 1, kernel_shape = [1, 1], pads = [0, 0, 0, 0], strides = [1, 1]](%488, %653, %654)
  %491 = Add(%652, %482)
  %492 = Relu(%491)
  %493 = GlobalAveragePool(%492)
  %494 = Flatten[axis = 1](%493)
  %outputs = Gemm[alpha = 1, beta = 1, transB = 1](%494, %fc.weight, %fc.bias)
  return %outputs, %356, %398, %460
}

I run build_engine on the TX2 with this model; does it get converted to fp32 + fp16 there?
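
One way to check what the builder actually produced (a sketch assuming the TensorRT 7 Python API and a placeholder path for the serialized engine) is to deserialize the engine and print the binding dtypes. Keep in mind that even with the FP16 flag set, the network input/output bindings stay FP32 by default, so an FP32 binding dtype does not by itself mean the layers are running in FP32 internally:

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.INFO)

with open("model.engine", "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())

for i in range(engine.num_bindings):
    # Binding dtypes reflect the network I/O types, which default to FP32
    # even when BuilderFlag.FP16 is enabled.
    print(engine.get_binding_name(i),
          engine.get_binding_dtype(i),
          engine.get_binding_shape(i))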

Hi,

This looks like a Jetson-related issue. We recommend you post your concern on the Jetson forum.

Thank you.

