How do we write business logic on top of the model trained with TLT?

I have trained SSD using the Transfer Learning Toolkit on Ubuntu 20.04 via the Docker container. I have two questions:

  1. The development environment is Ubuntu 20.04. In the Jupyter notebook, I am able to run inference on images using the following script:

    !ssd inference --gpu_index=$GPU_INDEX \
        -m $USER_EXPERIMENT_DIR/export/trt.engine \
        -e $SPECS_DIR/ssd_retrain_resnet18_kitti.txt \
        -i /workspace/examples/ssd/test_samples \
        -o $USER_EXPERIMENT_DIR/ssd_infer_images \
        -t 0.4

    Now, how do I write business logic on top of it by feeding in images from OpenCV (see the first sketch after this list)? Where is the file that contains the inference code?

  2. On Jetson, I have used DeepStream to deploy. But I couldn't find a simple example that shows how to deploy the custom model on the Jetson using the Python API. I have the .tlt, .etlt and .engine files, but the documentation is poor and scattered, so it is hard to follow. Also, how do I view the output the way I would with OpenCV's imshow? (A rough sketch of what I am after follows below.)
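
For question 1, what I am trying to do is roughly the following: feed OpenCV frames directly into the exported trt.engine through the TensorRT Python API. This is only a sketch, not the TLT sample code; the engine path, input size, preprocessing, and the test image name are my assumptions and would have to match the export settings, and the exact calls depend on the TensorRT version and on whether the engine uses implicit or explicit batch.

    import cv2
    import numpy as np
    import tensorrt as trt
    import pycuda.autoinit  # noqa: F401 - creates a CUDA context
    import pycuda.driver as cuda

    TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

    # Assumptions: adjust the engine path and the (C, H, W) input size to
    # whatever was used when the SSD model was exported from TLT.
    ENGINE_PATH = "trt.engine"
    INPUT_SHAPE = (3, 300, 300)

    def load_engine(path):
        with open(path, "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
            return runtime.deserialize_cuda_engine(f.read())

    def preprocess(bgr):
        # Resize and convert to CHW float32; the normalization must match
        # the preprocessing used during training/export (assumption here).
        img = cv2.resize(bgr, (INPUT_SHAPE[2], INPUT_SHAPE[1]))
        img = img.transpose(2, 0, 1).astype(np.float32)
        return np.ascontiguousarray(img)

    def infer(engine, chw_image):
        # Allocate one host/device buffer per binding (TensorRT 7/8 style API).
        host_bufs, dev_bufs, bindings = [], [], []
        for i in range(engine.num_bindings):
            size = trt.volume(engine.get_binding_shape(i))
            dtype = trt.nptype(engine.get_binding_dtype(i))
            host = cuda.pagelocked_empty(size, dtype)
            dev = cuda.mem_alloc(host.nbytes)
            host_bufs.append(host)
            dev_bufs.append(dev)
            bindings.append(int(dev))

        with engine.create_execution_context() as context:
            stream = cuda.Stream()
            np.copyto(host_bufs[0], chw_image.ravel())   # binding 0 = input
            cuda.memcpy_htod_async(dev_bufs[0], host_bufs[0], stream)
            # For an implicit-batch engine, use
            # context.execute_async(batch_size=1, ...) instead.
            context.execute_async_v2(bindings=bindings, stream_handle=stream.handle)
            for host, dev in zip(host_bufs[1:], dev_bufs[1:]):
                cuda.memcpy_dtoh_async(host, dev, stream)
            stream.synchronize()
        return host_bufs[1:]  # raw output tensors (detections before parsing)

    if __name__ == "__main__":
        engine = load_engine(ENGINE_PATH)
        frame = cv2.imread("test.jpg")        # any OpenCV frame source works
        outputs = infer(engine, preprocess(frame))
        print([o.shape for o in outputs])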
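
For question 2, what I am hoping for is something as small as the following Python/GStreamer pipeline, modelled on the deepstream_python_apps samples. The input file name and the nvinfer config file name (the config is what would point at the .etlt/.engine) are my assumptions; on Jetson the detections are drawn by nvdsosd and rendered by nveglglessink on the attached display, so cv2.imshow is not needed just to see the output.

    import sys
    import gi
    gi.require_version("Gst", "1.0")
    from gi.repository import Gst, GLib

    Gst.init(None)

    # Assumptions: an H.264 elementary stream as input and an nvinfer config
    # whose model-engine-file / tlt-encoded-model entries point at the
    # exported SSD model.
    VIDEO_FILE = "sample_720p.h264"
    PGIE_CONFIG = "config_infer_primary_ssd.txt"

    pipeline = Gst.parse_launch(
        f"filesrc location={VIDEO_FILE} ! h264parse ! nvv4l2decoder ! "
        "mux.sink_0 nvstreammux name=mux batch-size=1 width=1280 height=720 ! "
        f"nvinfer config-file-path={PGIE_CONFIG} ! "
        "nvvideoconvert ! nvdsosd ! nveglglessink"
    )

    loop = GLib.MainLoop()
    bus = pipeline.get_bus()
    bus.add_signal_watch()

    def on_message(bus, message):
        # Stop the main loop on end-of-stream or on any pipeline error.
        if message.type == Gst.MessageType.ERROR:
            err, dbg = message.parse_error()
            print(f"Error: {err} {dbg}", file=sys.stderr)
            loop.quit()
        elif message.type == Gst.MessageType.EOS:
            loop.quit()

    bus.connect("message", on_message)
    pipeline.set_state(Gst.State.PLAYING)
    try:
        loop.run()
    finally:
        pipeline.set_state(Gst.State.NULL)

To get the detections into Python for actual business logic (rather than just drawing them), the usual route in the deepstream_python_apps samples is a pad probe with the pyds bindings, e.g. on the nvdsosd sink pad, which exposes the object metadata per frame.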

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (For bugs: include which sample app is used, the configuration file contents, the command line used, and other details needed to reproduce)
• Requirement details (for new requirements: include the module name, i.e. which plugin or which sample application, and a description of the function)

There has been no update from you for a while, so we are assuming this is no longer an issue.
Hence we are closing this topic. If you need further support, please open a new one.
Thanks

Please provide the info as @kayccc mentioned