Hi all, I tried to run some of the examples using my own configuration for OCI.
The directory layout is this:

```
.
├── config
│   ├── config.yml
│   ├── prompts.yml
│   └── rails.co
└── test.py
```
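For context, my rails.co follows the stock hello-world example from the docs; roughly this (the exact flow and message names here are from that example, not necessarily identical to my file):

```colang
define user express greeting
  "Hello"
  "Hi"

define bot express greeting
  "Hello world!"
  "How are you doing?"

define flow greeting
  user express greeting
  bot express greeting
```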
config.yml:

```yaml
models:
  - type: main
    engine: oci_generative_ai
    model: cohere.command-r-plus-08-2024
    parameters:
      model_id: cohere.command-r-plus-08-2024
      service_endpoint: https://inference.generativeai.us-chicago-1.oci.oraclecloud.com
      compartment_id: ocid1.compartment.oci1.XXXXXXXX
      model_kwargs:
        temperature: 0
        max_tokens: 500
  - type: content_safety
    engine: oci_generative_ai
    model: cohere.command-r-plus-08-2024
    parameters:
      model_id: cohere.command-r-plus-08-2024
      service_endpoint: https://inference.generativeai.us-chicago-1.oci.oraclecloud.com
      compartment_id: ocid1.compartment.oci1.XXXXXXXX
      model_kwargs:
        temperature: 0
        max_tokens: 500

rails:
  input:
    flows:
      - self check input
  output:
    flows:
      - self check output

streaming: False
```
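I see the same behavior when exercising the config programmatically from test.py; a minimal sketch (assuming the standard NeMo Guardrails Python API and that OCI credentials are already set up in the environment):

```python
from nemoguardrails import RailsConfig, LLMRails

# Load the ./config directory (config.yml, prompts.yml, rails.co).
config = RailsConfig.from_path("./config")
rails = LLMRails(config)

# Send the same greeting the chat CLI would send.
response = rails.generate(messages=[
    {"role": "user", "content": "Hello!"}
])
print(response["content"])
```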
When I run

```
nemoguardrails chat --config ./config
```
and prompt with

Hello!
I don’t get back the expected
Hello world! How are you doing
but instead
Hello again! It’s great to hear from you. Is there something specific you’d like to discuss or any further questions you have? I’m here to assist you in any way I can.
What am I doing wrong?