How to modify app_base_inference Docker image files

I would like to make some edits to the app_base_inference files, specifically the classification_result_writer.py file. I changed the files in a running container instance, committed that container as a new image with a new name, and changed the app Dockerfile to reference the new image. However, when running run_docker.sh I get dropped straight into the root of the new app_base_inference container and cannot execute any commands; the process does not execute the way it does with the original image after the TRTIS container spins up. I have to manually shut down the container and delete the Docker network that the bash script creates. It all works as it should once I switch the reference back to the original image. I don't have the original files to recreate the app_base_inference Docker image - at least I am not aware of any. What should I do if I want to change the way app_base_inference operates? Thanks!
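
For reference, this is roughly the commit-and-cleanup workflow described above; the container ID, network name, and image tag are placeholders from my setup, not official names:

```sh
# Placeholders: take the real values from `docker ps` and `docker network ls`
CONTAINER_ID=abc123
NETWORK_NAME=clara-demo-net   # the network created by run_docker.sh

# Snapshot the edited running container as a new image (tag is my own choice)
docker commit "$CONTAINER_ID" my_app_base_inference:latest

# Manual cleanup after the hung run, since run_docker.sh never tears down
docker stop "$CONTAINER_ID"
docker rm "$CONTAINER_ID"
docker network rm "$NETWORK_NAME"
```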

I was able to rebuild the Docker image using the Dockerfile that is part of the original app_base_inference image, after tracking down the docker pull command for the older Clara base release referenced in its FROM line: nvcr.io/nvidia/clara/python-base:0.5.0-2004.7. I was able to change classification_writer.py to meet my requirements, but I see that a lot of the Clara files are in .pyc format. Are the original .py sources of these files available anywhere? For instance, medical.tlt2.src.components.inferers.inferer.
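
For anyone following along, the rebuild looked roughly like this; the image tag my_app_base_inference is my own name, not an official one:

```sh
# Pull the older Clara base image that the original Dockerfile's FROM line references
docker pull nvcr.io/nvidia/clara/python-base:0.5.0-2004.7

# Rebuild from the directory holding the recovered Dockerfile, with the
# modified writer copied into the build context
docker build -t my_app_base_inference:latest .
```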

P.S. I was able to decompile the .pyc files back to Python source using https://github.com/zrax/pycdc
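
In case it helps others, this is roughly how I used it; the .pyc path below is illustrative of where the Clara modules sit inside the image:

```sh
# Build pycdc from source (needs git, cmake, and a C++ compiler)
git clone https://github.com/zrax/pycdc.git
cd pycdc
cmake . && make

# Decompile a compiled module back to readable Python source
./pycdc /path/to/medical/tlt2/src/components/inferers/inferer.pyc > inferer.py
```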

Thanks again for the question.
Please note the compiled Python files are from the Clara AI transforms and inference modules/library, and the specific version used in the AI app_base_inference is not open sourced.

Great that you have succeeded in substituting your own version of the classification writer. The inference configuration file and the parser/component builder support custom implementations through the use of path instead of name, so you can customize the execution with your own versions of transforms and writers.
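
As a minimal sketch of what that looks like: the file location, field names, and module path below are illustrative, not the official schema, so please check the config shipped in the app. The point is that a built-in component is selected by name, while a custom class is referenced by its import path:

```sh
# Illustrative only: "path" points at a user-supplied writer class instead of
# a built-in one selected by "name".
cat > config/config_inference.json <<'EOF'
{
  "writers": [
    {
      "path": "my_writers.classification_writer.MyClassificationWriter",
      "args": { "output_path": "/output" }
    }
  ]
}
EOF
```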

This also reminds us on the Clara Deploy team to create better and additional guides and instructions on how to plug in a custom implementation, or a custom inference application altogether.

Thanks for the feedback.