How can I automate the Audio2Gesture workflow from the console?

Hey, I want to use Python to automate Audio2Gesture from the Windows console.
How can I do that?
I tried running kit.exe --exec "my_python_script.py" in the console, but it doesn't work. The log shows:

2023-07-20 07:49:43 [140ms] [Info] [] python GC: gc.disable()
2023-07-20 07:49:43 [140ms] [Info] [carb] Plugin carb.scripting-python.plugin is already a dependency of; not changing unload order
2023-07-20 07:49:43 [140ms] [Info] [] No run loop was found, quiting…
2023-07-20 07:49:43 [140ms] [Info] [] Application auto-quits as it worked for the specified number of frames: 0
2023-07-20 07:49:43 [141ms] [Info] [] app started
2023-07-20 07:49:43 [141ms] [Info] [carb] Initializing plugin: carb.threadtime-tracker.plugin (interfaces: [carb::threadtimetracker::IThreadTimeTracker v1.0]) (impl: carb.threadtime-tracker.plugin)

My goal: given an audio file, run a script that produces a .usda file via Audio2Gesture, rather than doing it by hand in the Machinima app.
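What I have in mind is something like the sketch below. It is untested: the extension name (`omni.anim.audio2gesture.core`), the exact way `--exec` forwards arguments to the script, and all paths are my assumptions/placeholders, not confirmed behavior.

```python
import subprocess


def build_kit_command(kit_exe: str, script: str, audio: str) -> list[str]:
    """Build a headless Kit invocation.

    `--exec` runs a Python script on startup; I'm assuming that appending
    the audio path after the script name inside the same argument forwards
    it to the script (e.g. via sys.argv).
    """
    return [
        kit_exe,
        # Assumed extension name for Audio2Gesture; needs to be verified
        # against the Extension Manager.
        "--enable", "omni.anim.audio2gesture.core",
        "--no-window",                 # run without a UI
        "--exec", f"{script} {audio}",
    ]


if __name__ == "__main__":
    cmd = build_kit_command(
        r"C:\path\to\kit.exe",             # placeholder install path
        "my_audio2gesture_script.py",      # placeholder script name
        "speech.wav",                      # placeholder audio file
    )
    print(" ".join(cmd))
    # subprocess.run(cmd, check=True)  # uncomment on a machine with Kit installed
```

Is this roughly the right shape, or is there a supported headless entry point for Audio2Gesture I should use instead?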

Any help would be appreciated, thanks!