Engine file and calib.table not saved in DeepStream

Which Jetson module are you using: Nano, Xavier, or Xavier NX?

If you have a pre-generated serialized engine file for the model, you can specify it via “model-engine-file”, either in the deepstream_app config file or in the corresponding pgie/sgie config file.
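For reference, a minimal sketch of what that could look like in the nvinfer pgie/sgie config file; the engine path and batch size below are placeholders, not your actual values:

```ini
# [property] group of the pgie/sgie (nvinfer) config file.
# The engine path is a placeholder; point it at your own serialized engine.
[property]
model-engine-file=/path/to/your_model_b1_gpu0_fp16.engine
batch-size=1
network-mode=2   # 0=FP32, 1=INT8, 2=FP16
```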

If you don’t have a pre-generated engine file, one will be generated in the same directory as your model file. However, the model-file you set appears to be in a PyTorch weights format (wts), which is not supported. You need to convert the model to ONNX format (specified via “onnx-file”) or to a TensorRT engine (then use “model-engine-file”) in order to use it in DeepStream.
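A sketch of the ONNX route, assuming placeholder file names for your converted model: the nvinfer config points at the .onnx file, and on the first run an engine is serialized next to the model (typically named like `<model>_b<batch>_gpu<id>_<precision>.engine`), which you can then reuse via “model-engine-file” to skip rebuilding:

```ini
# Point nvinfer at the converted ONNX model instead of the unsupported .wts file.
# Paths are placeholders for your own files.
[property]
onnx-file=/path/to/your_model.onnx
# After the first run, reuse the generated engine to avoid rebuilding it:
# model-engine-file=/path/to/your_model.onnx_b1_gpu0_fp16.engine
batch-size=1
network-mode=2   # 0=FP32, 1=INT8, 2=FP16
```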

Can you search for files with the “.engine” suffix in your deepstream directory, for example with the command sketched below?
In your case there is no pre-generated engine file and no valid model defined by model-file, yet your program still runs successfully. I’m not sure whether some other engine file is being picked up by your program; please double-check.
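One possible way to look, assuming the default DeepStream install path; the second path is just a placeholder for wherever your application and models live:

```bash
# Search the DeepStream install tree and your application directory
# for any serialized TensorRT engine files.
find /opt/nvidia/deepstream -name "*.engine" 2>/dev/null
find /path/to/your_app_dir -name "*.engine" 2>/dev/null
```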