Support for medium Qwen 3.5 models

I’m specifically suggesting support for the 3.5-35b-a3b and 27b models, both of which are VL models and extremely capable for their size. The current large model is powerful but clunky for most use cases.