What is the recommended way in DeepStream to quickly draw information on the original frame and encode it as JPG?

My program is modified from the deepstream-app example and runs on a Jetson NX. Now I need to add a feature that compresses the original frame into JPG and sends it over the network. Before encoding it as JPG, I need to draw the object detection boxes, OSD information, area boundary boxes, etc. onto the original frame. In addition, the JPG encoding needs to be as efficient as possible. How should this feature be added, and can you recommend a general implementation?

It can be summed up in three steps (a rough sketch of what I have in mind follows the list):

  1. Retrieve the original frame from a DeepStream data structure at some stage of the pipeline.
    ----> ( But at which stage? From which element? And in what way? )
  2. After getting the original frame, draw the information I need on it.
    ----> ( Using OpenCV, or something else? )
  3. Encode the whole frame, after drawing, into JPG format. (The purpose of this step is to reduce the amount of data sent over the network.)
    ----> ( What is the fastest way to encode? Is it possible to retrieve the encoded JPEG stream directly from DeepStream? )
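
For reference, below is the kind of pad-probe approach I have been considering, based on my understanding of NvBufSurface and the metadata API. It is only a sketch and not tested: the probe name, the JPEG quality value and the error handling are my own placeholders, and it assumes the frames after nvdsosd are RGBA in NVMM memory, as in the default deepstream-app pipeline.

```cpp
/* A rough sketch only -- not tested.  The probe name, the JPEG quality value
 * and the error handling are my own placeholders.  It assumes the frame after
 * nvdsosd is RGBA in NVMM memory, as in the default deepstream-app pipeline. */
#include <vector>
#include <opencv2/opencv.hpp>
#include <gst/gst.h>
#include "gstnvdsmeta.h"
#include "nvbufsurface.h"

/* Attached to the src pad of nvdsosd, so the detection boxes and OSD text are
 * already drawn into the frame when this probe sees the buffer. */
static GstPadProbeReturn
osd_src_pad_probe (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
  GstBuffer *buf = GST_PAD_PROBE_INFO_BUFFER (info);
  GstMapInfo map_info;

  if (!gst_buffer_map (buf, &map_info, GST_MAP_READ))
    return GST_PAD_PROBE_OK;

  NvBufSurface *surface = (NvBufSurface *) map_info.data;
  NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);
  if (!batch_meta) {
    gst_buffer_unmap (buf, &map_info);
    return GST_PAD_PROBE_OK;
  }

  for (NvDsMetaList *l = batch_meta->frame_meta_list; l != NULL; l = l->next) {
    NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) l->data;
    gint idx = frame_meta->batch_id;

    /* Map the frame into CPU-accessible memory. */
    if (NvBufSurfaceMap (surface, idx, 0, NVBUF_MAP_READ) != 0)
      continue;
    NvBufSurfaceSyncForCpu (surface, idx, 0);

    NvBufSurfaceParams *params = &surface->surfaceList[idx];
    cv::Mat rgba (params->height, params->width, CV_8UC4,
        params->mappedAddr.addr[0], params->pitch);

    /* Extra drawing (e.g. my area boundary boxes) could go here with
     * cv::rectangle / cv::polylines before encoding. */
    cv::Mat bgr;
    cv::cvtColor (rgba, bgr, cv::COLOR_RGBA2BGR);

    /* CPU JPEG encoding via OpenCV. */
    std::vector<int> enc_params = { cv::IMWRITE_JPEG_QUALITY, 80 };
    std::vector<uchar> jpg;
    cv::imencode (".jpg", bgr, jpg, enc_params);
    /* ... hand `jpg` to my network sender here ... */

    NvBufSurfaceUnMap (surface, idx, 0);
  }

  gst_buffer_unmap (buf, &map_info);
  return GST_PAD_PROBE_OK;
}
```

I would attach it with something like `gst_pad_add_probe (osd_src_pad, GST_PAD_PROBE_TYPE_BUFFER, osd_src_pad_probe, NULL, NULL)`. If `cv::imencode` on the CPU turns out to be too slow, would a tee after nvdsosd into `nvvideoconvert ! nvjpegenc ! appsink` (to use the hardware JPEG encoder), or the `nvds_obj_enc_*` API from the deepstream-image-meta-test sample, be the recommended way to get the JPEG directly from DeepStream?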

My environment information:
• Hardware Platform: Jetson NX
• DeepStream Version: 5.0
• JetPack Version: 4.4.1

This duplicates my topic "What is the best way to quickly draw information on the original frame and encode it as JPG?" in Intelligent Video Analytics / DeepStream SDK on the NVIDIA Developer Forums.

Well, mainly I don't know which section would be the better place for this topic.

Could you please help me solve this problem? Thanks.