Confirming:
- Yes, the Morpheus version is 24.03.02:
Python 3.10.14 | packaged by conda-forge | (main, Mar 20 2024, 12:45:18) [GCC 12.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import morpheus
>>> morpheus.__version__
'24.03.02'
- Regarding these variables: Config.model_max_batch_size, Config.feature_length
- In our env file we set max_batch_size to 1024, but the model's .pbtxt file still shows the default value of 524288, so the two do not match.
→ I can change the .pbtxt value to 1024 and test.
- feature_length is not set in the env file, but the code defaults it to 32.
Is there another place where I should check? (See the sketch below for how I currently understand these two settings.)
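Here is a minimal sketch of what I mean, assuming the Config object is built in morpheus_pipeline_builder.py (the attribute names come from morpheus.config.Config; the 1024 and 32 values are the ones from our env file and code defaults):

from morpheus.config import Config

config = Config()

# Batch size from our env file; the Triton model's max_batch_size in
# config.pbtxt has to be at least this large (it currently shows 524288).
config.model_max_batch_size = 1024

# Not set in the env file; 32 is the default I see in the code, and it has
# to agree with the input shape the model expects on the Triton side.
config.feature_length = 32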
I do have this variable set: "./morpheus_pipeline/morpheus_pipeline_builder.py: force_convert_inputs=True,"
Where can I set inout_mapping?
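My current guess, if I am reading the 24.03 API correctly, is that it can be passed when the TritonInferenceStage is constructed; a sketch along these lines (the model name, server URL, and tensor names below are placeholders, not our real values):

from morpheus.config import Config
from morpheus.stages.inference.triton_inference_stage import TritonInferenceStage

config = Config()  # in our code this is pipeline_builder.config

inference_stage = TritonInferenceStage(
    config,
    model_name="ad_fi_1.0.0",     # placeholder: the model this branch targets
    server_url="ai-engine:8001",  # placeholder: our Triton gRPC endpoint
    force_convert_inputs=True,
    inout_mapping={"input__0": "INPUT__0"},  # example only: Morpheus tensor name -> Triton tensor name
)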
The code gets stuck after adding the first stages:
pipeline_builder.add_broadcast_stage(
    stage_name="broadcast-feature-engineering",
    branch_names=[ad_branch, nd_branch],
    output_type=LeftShiftMessageMeta,
).add_stage(
    stage=ADDataLoadingLeftShiftStage(pipeline_builder.config, suffix=suffix),
    monitor_description="AD Data Loading (Left shift) Throughput",
    stage_branch_name=ad_branch,
).add_stage(
    stage=NDDataLoadingLeftShiftStage(pipeline_builder.config),
    monitor_description="ND Data Loading Throughput",
    stage_branch_name=nd_branch,
)
And this is the output I get on the terminal:
WARNING: Logging before InitGoogleLogging() is written to STDERR
attempting retry.
W20250521 16:14:24.913244 41301 inference_client_stage.cpp:255] Exception while processing message for InferenceClientStage, attempting retry.
Pipeline Throughput: 0 events [04:07, ? events/s]W20250521 16:14:24.913529 41300 inference_client_stage.cpp:255] Exception while processing message for InferenceClientStage, attempting retry.
W20250521 16:14:25.717754 41301 inference_client_stage.cpp:255] Exception while processing message for InferenceClientStage, attempting retry.
Pipeline Throughput: 0 events [04:09, ? events/s]W20250521 16:14:27.322055 41300 inference_client_stage.cpp:255] Exception while processing message for InferenceClientStage, attempting retry.
W20250521 16:14:27.322139 41301 inference_client_stage.cpp:255] Exception while processing message for InferenceClientStage, attempting retry.
Pipeline Throughput: 0 events [04:12, ? events/s]W20250521 16:14:30.526607 41301 inference_client_stage.cpp:255] Exception while processing message for InferenceClientStage, attempting retry.
W20250521 16:14:30.526727 41300 inference_client_stage.cpp:255] Exception while processing message for InferenceClientStage, attempting retry.
Pipeline Throughput: 0 events [04:16, ? events/s]W20250521 16:14:34.531513 41300 inference_client_stage.cpp:255] Exception while processing message for InferenceClientStage, attempting retry.
W20250521 16:14:34.531800 41301 inference_client_stage.cpp:255] Exception while processing message for InferenceClientStage, attempting retry.
Pipeline Throughput: 0 events [04:20, ? events/s]W20250521 16:14:38.535614 41300 inference_client_stage.cpp:255] Exception while processing message for InferenceClientStage, attempting retry.
W20250521 16:14:38.535822 41301 inference_client_stage.cpp:255] Exception while processing message for InferenceClientStage, attempting retry.
Pipeline Throughput: 0 events [04:24, ? events/s]W20250521 16:14:42.540000 41301 inference_client_stage.cpp:255] Exception while processing message for InferenceClientStage, attempting retry.
W20250521 16:14:42.540230 41300 inference_client_stage.cpp:255] Exception while processing message for InferenceClientStage, attempting retry.
Pipeline Throughput: 0 events [04:28, ? events/s]E20250521 16:14:46.546599 41300 runnable.hpp:112] /main/inference-5; rank: 0; size: 1; tid: 140479834293824 Unhandled exception occurred. Rethrowing
E20250521 16:14:46.546630 41300 context.cpp:124] /main/inference-5; rank: 0; size: 1; tid: 140479834293824: set_exception issued; issuing kill to current runnable.
Exception msg: RuntimeError: **Model is not ready**
I confirmed that we have >500 records of input data.
I confirmed from kubectl logs that the models were loaded successfully:
kubectl logs ai-engine-f599fbdd8-nv6nj -n csgdev -c morpheus-ai-engine | grep successfully
I0513 12:21:09.332542 1 model_lifecycle.cc:835] successfully loaded 'ad_fi_1.0.0'
I0513 12:21:43.072444 1 model_lifecycle.cc:835] successfully loaded 'nd_classifier_2.0.0'
I0513 12:21:58.483691 1 model_lifecycle.cc:835] successfully loaded 'nd_embedder_2.0.0'
I0513 12:22:32.098439 1 model_lifecycle.cc:835] successfully loaded 'nd_rf_severity_regressor_2.0.0'
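As an additional check, I can query Triton's readiness API directly with tritonclient; a sketch (the URL is an assumption based on our service name, and the model names are copied from the logs above):

import tritonclient.http as httpclient

# URL is an assumption for our morpheus-ai-engine Triton HTTP port; adjust as needed.
client = httpclient.InferenceServerClient(url="ai-engine:8000")

print("server ready:", client.is_server_ready())
for name in ("ad_fi_1.0.0", "nd_classifier_2.0.0",
             "nd_embedder_2.0.0", "nd_rf_severity_regressor_2.0.0"):
    print(name, "ready:", client.is_model_ready(name))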