Problem setting up NeMo Guardrails utterance flows with a local LLM

Good morning/afternoon/evening to those reading this.

I’m trying to set up a local NeMo Guardrails environment. To start, I’m trying to run the ABC_v2 example locally. I changed the model to my local one:

models:
  - type: main
    engine: ollama
    model: llama3.2:3b
    base_url: http://localhost:11434

I’m trying to invoke a flow defined in /abc_v2/rails/disallowed.co by sending the exact same message written there.
However, even though I’ve tried multiple models, I can’t get it to work. I always receive the generic refusal message “I’m sorry, I can’t respond to that.” instead of “I’m sorry, but it’s inappropriate and against my programming […]”.
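
For reference, I’m loading the config and sending the message roughly like this (a simplified sketch; the path and the user message are placeholders for my actual setup):

from nemoguardrails import RailsConfig, LLMRails

# Load the ABC_v2 example config (placeholder path)
config = RailsConfig.from_path("./abc_v2")
rails = LLMRails(config)

# Send the exact utterance that the flow in disallowed.co should match
response = rails.generate(messages=[
    {"role": "user", "content": "<the exact message from disallowed.co>"}
])
print(response["content"])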

Should I define an embedding model somewhere? Should I use a different base_url? Is this setup incompatible with ollama?

Please help me!
Thanks in advance

Hello. From what you have shared, I do not think you need an embedding model here, and there is no incompatibility with ollama either. As long as your LLM is a chat model, that is not the issue.

Can you share your prompts.yml and disallowed.co files?
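
In the meantime, one way to see what is actually happening is to inspect the last generation with rails.explain() (a quick sketch, assuming a reasonably recent nemoguardrails version):

from nemoguardrails import RailsConfig, LLMRails

config = RailsConfig.from_path("./abc_v2")
rails = LLMRails(config)
response = rails.generate(messages=[{"role": "user", "content": "your test message"}])

# Inspect what the runtime decided for the last generation
info = rails.explain()
info.print_llm_calls_summary()   # summary of the LLM calls that were made
print(info.colang_history)       # the Colang events/flows that were triggered

That should show whether your flow in disallowed.co is being matched at all, or whether the runtime is falling back to the generic refusal.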