Deploying a TensorFlow model in TensorRT

I have a TensorFlow model for human pose estimation: https://github.com/ildoonet/tf-pose-estimation.

The TensorFlow pipeline has two parts: human-body inference using the CMU model (https://github.com/ildoonet/tf-pose-estimation/tree/master/models/graph), followed by post-processing with a few TensorFlow ops, as follows.

# Network input and the raw 57-channel output of the CMU model
self.tensor_image = self.graph.get_tensor_by_name('TfPoseEstimator/image:0')
self.tensor_output = self.graph.get_tensor_by_name('TfPoseEstimator/Openpose/concat_stage7:0')

# Split the output into 19 heatmap channels and 38 PAF channels
self.tensor_heatMat = self.tensor_output[:, :, :, :19]
self.tensor_pafMat = self.tensor_output[:, :, :, 19:]

# Upsample both maps to a size chosen at runtime
self.upsample_size = tf.placeholder(dtype=tf.int32, shape=(2,), name='upsample_size')
self.tensor_heatMat_up = tf.image.resize_area(self.tensor_output[:, :, :, :19], self.upsample_size,
                                              align_corners=False, name='upsample_heatmat')
self.tensor_pafMat_up = tf.image.resize_area(self.tensor_output[:, :, :, 19:], self.upsample_size,
                                             align_corners=False, name='upsample_pafmat')

I can already convert the CMU model to a TensorRT INT8 engine. I would like to run the post-processing part in TensorRT INT8 as well.
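For reference, this is roughly how I build the INT8 engine from the UFF-converted CMU model today. The file paths, input shape, and calibrator are placeholders from my setup, not part of the original repo:

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

# Parse the UFF model and build an INT8 engine (TensorRT 5/6 era API).
with trt.Builder(TRT_LOGGER) as builder, \
     builder.create_network() as network, \
     trt.UffParser() as parser:
    # UFF parser expects CHW; 432x368 is the default tf-pose input size
    parser.register_input('TfPoseEstimator/image', (3, 368, 432))
    parser.register_output('TfPoseEstimator/Openpose/concat_stage7')
    parser.parse('cmu_model.uff', network)

    builder.max_batch_size = 1
    builder.max_workspace_size = 1 << 30
    builder.int8_mode = True
    builder.int8_calibrator = my_calibrator  # my IInt8EntropyCalibrator2 subclass
    engine = builder.build_cuda_engine(network)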

So my questions are:

(1) How can I add those post-processing TensorFlow layers to the CMU model's TensorRT conversion, in UFF or with GraphSurgeon, whichever is possible? A sketch of what I imagine this would look like follows.
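To make the question concrete, here is how I imagine option (1) with GraphSurgeon. The plugin op name 'ResizeArea_TRT' and its attributes are hypothetical: as far as I can tell, UFF has no built-in mapping for tf.image.resize_area, so a custom IPluginV2 (or switching to nearest/bilinear resize) would be needed, and the dynamic upsample_size placeholder would have to become a static attribute:

import graphsurgeon as gs
import uff

dynamic_graph = gs.DynamicGraph('cmu_model_frozen.pb')

# Hypothetical plugin nodes standing in for the two resize_area ops
resize_heat = gs.create_plugin_node(name='upsample_heatmat',
                                    op='ResizeArea_TRT',  # custom plugin, assumed
                                    scale=8)              # example static attribute
resize_paf = gs.create_plugin_node(name='upsample_pafmat',
                                   op='ResizeArea_TRT',
                                   scale=8)

# Replace the TF resize subgraphs with the plugin nodes
dynamic_graph.collapse_namespaces({
    'upsample_heatmat': resize_heat,
    'upsample_pafmat': resize_paf,
})

uff_model = uff.from_tensorflow(dynamic_graph.as_graph_def(),
                                output_nodes=['upsample_heatmat', 'upsample_pafmat'],
                                output_filename='cmu_model_post.uff')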

(2) Or should I create a TensorRT network for those layers, build it into a second engine, and run the two engines one after another (see the sketch below)?
Which way is better?
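Here is a rough sketch of option (2) as a standalone network built with the TensorRT network API. The shapes (57 channels at 46x82 for a 368x656 input, stride 8) and the fixed upsample size are assumptions from my setup, and since TensorRT's IResizeLayer offers no area mode, nearest-neighbour resize is substituted here, which will not match tf.image.resize_area exactly. add_resize requires TensorRT >= 6:

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

with trt.Builder(TRT_LOGGER) as builder, builder.create_network() as network:
    # CHW input: 19 heatmap + 38 PAF channels from concat_stage7
    inp = network.add_input('concat_stage7', trt.float32, (57, 46, 82))

    # Slice into heatmaps and PAFs (equivalent to the [:, :, :, :19] / [19:] split)
    heat = network.add_slice(inp, start=(0, 0, 0), shape=(19, 46, 82), stride=(1, 1, 1))
    paf = network.add_slice(inp, start=(19, 0, 0), shape=(38, 46, 82), stride=(1, 1, 1))

    # Upsample to a fixed size (assumed 368x656); nearest replaces resize_area
    heat_up = network.add_resize(heat.get_output(0))
    heat_up.shape = (19, 368, 656)
    heat_up.resize_mode = trt.ResizeMode.NEAREST

    paf_up = network.add_resize(paf.get_output(0))
    paf_up.shape = (38, 368, 656)
    paf_up.resize_mode = trt.ResizeMode.NEAREST

    network.mark_output(heat_up.get_output(0))
    network.mark_output(paf_up.get_output(0))

    builder.max_workspace_size = 1 << 28
    # builder.int8_mode = True  # could be enabled with a calibrator, as for the CMU engine
    post_engine = builder.build_cuda_engine(network)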