AttributeError: 'NoneType' object has no attribute 'create_execution_context'. How to create resnet_trt.pth from resnet.pth

Hello. I tried to launch the resnet pose_estimation program in the Docker container and got this error:

[12/13/2023-11:01:18] [TRT] [E] 1: [stdArchiveReader.cpp::StdArchiveReader::35] Error Code 1: Serialization (Serialization assertion safeVersionRead == safeSerializationVersion failed.Version tag does not match. Note: Current Version: 0, Serialized Engine Version: 87)
[12/13/2023-11:01:19] [TRT] [E] 4: [runtime.cpp::deserializeCudaEngine::50] Error Code 4: Internal Error (Engine deserialization failed.)
Traceback (most recent call last):
  File "pose_estimation_run.py", line 166, in <module>
    pose_estimation = PoseEstimation()
  File "pose_estimation_run.py", line 29, in __init__
    self.model_trt.load_state_dict(torch.load(OPTIMIZED_MODEL))
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 1468, in load_state_dict
    load(self)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 1463, in load
    state_dict, prefix, local_metadata, True, missing_keys, unexpected_keys, error_msgs)
  File "/usr/local/lib/python3.6/dist-packages/torch2trt/torch2trt.py", line 591, in _load_from_state_dict
    self.context = self.engine.create_execution_context()
AttributeError: 'NoneType' object has no attribute 'create_execution_context'

Full log is here:

log11.txt (2.9 KB)

As far as I understand, I have the wrong resnet18_baseline_att_224x224_A_epoch_249_trt.pth file: it was created on a different TensorRT version. I need to create resnet18_baseline_att_224x224_A_epoch_249_trt.pth from resnet18_baseline_att_224x224_A_epoch_249.pth in the Docker container, on my current TensorRT version.

Could you please explain how I can create the resnet18_baseline_att_224x224_A_epoch_249_trt.pth file from the resnet18_baseline_att_224x224_A_epoch_249.pth file inside the Docker container?

Hi,

Do you use an Orin-series device? It seems the engine was serialized for GPU architecture 87 rather than 53.

Error Code 1: Serialization (Serialization assertion safeVersionRead == safeSerializationVersion failed.Version tag does not match. Note: Current Version: 0, Serialized Engine Version: 87)

Based on the above message, this is a compatibility issue.

Please create the TensorRT engine on the same device that will run it.
Also, please launch the container with --runtime nvidia to enable GPU access.
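
For reference, below is a minimal sketch of regenerating the *_trt.pth on your device with torch2trt, assuming the standard trt_pose setup (human_pose.json with 18 keypoints and 21 links, as in the live demo); file names match your checkpoint:

import json

import torch
import torch2trt
import trt_pose.models

# Topology file from the trt_pose repository (18 keypoints, 21 links).
with open('human_pose.json', 'r') as f:
    human_pose = json.load(f)
num_parts = len(human_pose['keypoints'])
num_links = len(human_pose['skeleton'])

# Load the original PyTorch checkpoint.
model = trt_pose.models.resnet18_baseline_att(num_parts, 2 * num_links).cuda().eval()
model.load_state_dict(torch.load('resnet18_baseline_att_224x224_A_epoch_249.pth'))

# Build the TensorRT engine on this device so the serialized engine
# matches the local GPU architecture and TensorRT version.
data = torch.zeros((1, 3, 224, 224)).cuda()
model_trt = torch2trt.torch2trt(model, [data], fp16_mode=True, max_workspace_size=1 << 25)

# The saved state_dict embeds the engine; reload it later with torch2trt.TRTModule().
torch.save(model_trt.state_dict(), 'resnet18_baseline_att_224x224_A_epoch_249_trt.pth')

An engine serialized this way will only deserialize on the same GPU architecture and TensorRT version, which is why it must be rebuilt on your Nano.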

Thanks.

Thanks for the reply.

All I know is that my Jetson model is P3450. (If this information is not enough, could you please explain how I can get the necessary information?)

Could you please give me instructions for creating the TensorRT engine inside the Docker container?

Hi,

The engine is built for Orin (87) but your device is Nano (53).
So it’s incompatible.

The easiest way to build a TensorRT engine is with trtexec.
For example:

/usr/src/tensorrt/bin/trtexec --onnx=[model] --saveEngine=[output]

Thanks.

Hello. Unfortunately, I couldn't do that because I don't have the ONNX file.

I tried to install it twice. First in the container launched with docker/run.sh -c trtpose2 --volume. The log is here:
log2.txt (9.4 KB)

And then in the container launched with sudo docker run -it --runtime nvidia trtpose2. The log is here:
log3.txt (7.1 KB)

How can I install it? Maybe I need a specific version or additional packages?

Hi,

You don't need to install ONNX to feed a .onnx file into trtexec.

Based on your log, you are trying to use jetson-inference.
Could you share which sample you are using?

Is your model resnet18_baseline_att_224x224_A_epoch_249.pth?
If yes, please convert the .pth model into .onnx with PyTorch.
Then you can feed the .onnx model to trtexec directly.
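
For example, here is a minimal sketch of the PyTorch export, assuming the trt_pose model definition and the standard human_pose.json topology (the 'cmap'/'paf' output names are illustrative, not required):

import json

import torch
import trt_pose.models

# Rebuild the network exactly as trained (standard human_pose.json
# topology: 18 keypoints, 21 links).
with open('human_pose.json', 'r') as f:
    human_pose = json.load(f)
model = trt_pose.models.resnet18_baseline_att(
    len(human_pose['keypoints']), 2 * len(human_pose['skeleton'])).eval()
model.load_state_dict(torch.load('resnet18_baseline_att_224x224_A_epoch_249.pth',
                                 map_location='cpu'))

# Trace with a dummy input at the 224x224 training resolution.
dummy = torch.zeros((1, 3, 224, 224))
torch.onnx.export(model, dummy,
                  'resnet18_baseline_att_224x224_A_epoch_249.onnx',
                  input_names=['input'], output_names=['cmap', 'paf'],
                  opset_version=11)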

Thanks.

Hello. Yes, my model is resnet18_baseline_att_224x224_A_epoch_249_trt.pth. And my Docker container is jetson-inference/docs/aux-docker.md at master · dusty-nv/jetson-inference · GitHub. (All my other containers are based on it.)

Could you give me an example of the conversion from .pth to .onnx?

Thanks.

Hi,

The model can be converted into a TensorRT engine with the GitHub repository below:

Do you meet any issues when trying it?

Thanks.

Hello. I am working on it (first trying to solve a problem with disk space on my device).

Hello. I used this method for the conversion: Conversion of model weights for human pose estimation model to ONNX results in nonsensical pose estimation - #11 by AastaLLL

I got the error:

It seems I can't do this operation on the Jetson because it is not powerful enough.
Maybe I can just download the .onnx file from somewhere?

If I can't download it, I will try to do the conversion on my PC.

Hi,

Please try to generate the ONNX file with the commands below:

$ git clone https://github.com/NVIDIA-AI-IOT/trt_pose.git
$ sudo docker run -it --rm --runtime nvidia -v /home/nvidia/trt_pose:/home/nvidia/trt_pose --network host nvcr.io/nvidia/l4t-pytorch:r32.5.0-pth1.7-py3
$ cd /home/nvidia/trt_pose/
$ sudo python3 setup.py install
$ cd trt_pose/utils/
$ ./export_for_isaac.py --input_checkpoint resnet18_baseline_att_224x224_A_epoch_249.pth
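
Optionally, if the onnx Python package is available in that container, a quick sanity check on the exported file (the filename here is an assumption; use whatever export_for_isaac.py actually wrote):

import onnx

# Assumed output name; adjust to the file export_for_isaac.py produced.
model = onnx.load('resnet18_baseline_att_224x224_A_epoch_249.onnx')
onnx.checker.check_model(model)                 # raises if the graph is malformed
print([inp.name for inp in model.graph.input])  # confirm the input binding name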

Thanks.

Hi. Sorry for the late reply. I followed the instructions and created the ONNX file.

Could you please explain how I can convert my .onnx file to the engine in my container? I have TensorRT, but unfortunately I still don't understand what exactly I should do to create the engine.

Thanks.

Hi,

Based on the log, you have converted the model to ONNX successfully.
Then please try to deploy it with trtexec outside of the container.

$ /usr/src/tensorrt/bin/trtexec --onnx=[model] --saveEngine=[output]
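
To verify the resulting engine actually loads on your device, here is a minimal sketch with the TensorRT Python API (this is the exact step that failed in your original traceback; the filename is whatever you passed to --saveEngine):

import tensorrt as trt

# Deserialize the engine built by trtexec on this same device.
logger = trt.Logger(trt.Logger.WARNING)
runtime = trt.Runtime(logger)
with open('resnet18_baseline_att.engine', 'rb') as f:
    engine = runtime.deserialize_cuda_engine(f.read())

# On an architecture/version mismatch, deserialization returns None,
# which is what caused the original 'NoneType' AttributeError.
assert engine is not None, 'engine failed to deserialize on this device'
context = engine.create_execution_context()
print('Engine deserialized and execution context created.')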

Thanks.
