TorchScripted PyTorch Lightning Module Fails to Load

I’m attempting to launch a Triton server instance with a TorchScripted module. The module was trained with help from PyTorch Lightning, so it carries many internal attributes, which I think is causing the error below. The TorchScripted module runs without issue outside of Triton. Any ideas whether this is recoverable, or will the TorchScripted module's internals have to be modified?
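For reference, the module was exported roughly like the sketch below; `LitModel`, the checkpoint name, and the repository path are simplified stand-ins rather than my exact code:

    import torch
    from my_project.models import LitModel  # hypothetical pytorch_lightning.LightningModule subclass

    # load_from_checkpoint is the standard LightningModule API
    model = LitModel.load_from_checkpoint("last.ckpt")
    model.eval()
    scripted = torch.jit.script(model)
    torch.jit.save(scripted, "model_repository/model/1/model.pt")

Here is the failure on the Triton side: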

    I0616 17:49:31.589088 1 libtorch.cc:1004] TRITONBACKEND_ModelFinalize: delete model state
    E0616 17:49:31.589125 1 model_repository_manager.cc:1213] failed to load 'model' version 1: Internal: failed to load model 'model':
    Unknown type name 'NoneType':
    Serialized   File "code/__torch__/model.py", line 8
      _dtype : int
      _device : Device
      trainer : NoneType
                ~~~~~~~~ <--- HERE
      _distrib_type : NoneType
      _device_type : NoneType

    I0616 17:49:31.589274 1 server.cc:504]
    +------------------+------+
    | Repository Agent | Path |
    +------------------+------+
    +------------------+------+

    I0616 17:49:31.589390 1 server.cc:543]
    +-------------+-----------------------------------------------------------------+--------+
    | Backend     | Path                                                            | Config |
    +-------------+-----------------------------------------------------------------+--------+
    | tensorrt    | <built-in>                                                      | {}     |
    | pytorch     | /opt/tritonserver/backends/pytorch/libtriton_pytorch.so         | {}     |
    | tensorflow  | /opt/tritonserver/backends/tensorflow1/libtriton_tensorflow1.so | {}     |
    | onnxruntime | /opt/tritonserver/backends/onnxruntime/libtriton_onnxruntime.so | {}     |
    | openvino    | /opt/tritonserver/backends/openvino/libtriton_openvino.so       | {}     |
    +-------------+-----------------------------------------------------------------+--------+

    I0616 17:49:31.589476 1 server.cc:586]
    +-------+---------+---------------------------------------------------------------------------------------------+
    | Model | Version | Status                                                                                      |
    +-------+---------+---------------------------------------------------------------------------------------------+
    | ftoi  | 1       | UNAVAILABLE: Internal: failed to load model 'model':                                         |
    |       |         | Unknown type name 'NoneType':                                                               |
    |       |         | Serialized   File "code/__torch__/model.py", line 8 |
    |       |         |   _dtype : int                                                                              |
    |       |         |   _device : Device                                                                          |
    |       |         |   trainer : NoneType                                                                        |
    |       |         |             ~~~~~~~~ <--- HERE                                                              |
    |       |         |   _distrib_type : NoneType                                                                  |
    |       |         |   _device_type : NoneType                                                                   |
    +-------+---------+---------------------------------------------------------------------------------------------+

For future travelers: this was solved by deleting all of the `None`-valued attributes from the torch module prior to running `torch.jit.script`.

    model.eval()

    # Attributes whose value is None (trainer, _distrib_type, _device_type,
    # ...) get serialized with type NoneType, which Triton's libtorch backend
    # rejects with "Unknown type name 'NoneType'", so collect them here.
    remove_attributes = []
    for key, value in vars(model).items():
        if value is None:
            remove_attributes.append(key)

    # Drop them before scripting.
    for key in remove_attributes:
        delattr(model, key)
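
With the `None`-valued attributes removed, scripting and saving work as usual. The sketch below also reloads the saved file, which exercises the same TorchScript deserialization step that failed inside Triton, so it should catch this kind of error before the model ever reaches the server. The repository path is illustrative, following Triton's `<model_repository>/<model_name>/<version>/model.pt` layout rather than my exact setup:

    import torch

    # Re-script the cleaned-up module and place it in the Triton model repository.
    scripted = torch.jit.script(model)
    torch.jit.save(scripted, "model_repository/model/1/model.pt")

    # Reloading goes through TorchScript deserialization, which is where the
    # "Unknown type name 'NoneType'" error was raised when Triton loaded the model.
    reloaded = torch.jit.load("model_repository/model/1/model.pt")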