
vllm-mlx

OpenAI and Anthropic compatible server for Apple Silicon. Run LLMs and vision-language models (Llama, Qwen-VL, LLaVA) with continuous batching, MCP tool calling, and multimodal support. Native MLX backend, 400+ tok/s. Works with Claude Code.
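Because the server exposes an OpenAI-compatible API, any standard OpenAI client can talk to it once it is running locally. Below is a minimal sketch; the base URL (http://localhost:8000/v1), the placeholder API key, and the model id are assumptions for illustration, not values taken from this listing.

```python
# Minimal sketch: query a locally running vllm-mlx server through its
# OpenAI-compatible endpoint. The base_url, api_key, and model name are
# assumptions; adjust them to match your local setup.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # assumed local server address
    api_key="not-needed-locally",         # local servers typically ignore the key
)

response = client.chat.completions.create(
    model="mlx-community/Llama-3.2-3B-Instruct-4bit",  # assumed model id
    messages=[
        {"role": "user", "content": "Summarize what MLX is in one sentence."}
    ],
)
print(response.choices[0].message.content)
```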


Package information

Author: registry
License: Not specified
Last updated:
Tags: anthropic, apple silicon, audio processing, claude code, computer vision, image understanding, inference, llm, machine learning, macos, mllm, mlx, multimodal ai, speech to text, stt, text to speech, tts, video understanding, vision language model, vllm