vllm-mlx
OpenAI and Anthropic compatible server for Apple Silicon. Run LLMs and vision-language models (Llama, Qwen-VL, LLaVA) with continuous batching, MCP tool calling, and multimodal support. Native MLX backend, 400+ tok/s. Works with Claude Code.
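Because the server exposes an OpenAI-compatible API, any OpenAI-style HTTP client can talk to it. A minimal sketch of building such a request with the standard library, assuming a default local endpoint (`http://localhost:8000/v1`) and an example MLX model name, both of which are hypothetical placeholders:

```python
import json
from urllib import request

# Hypothetical endpoint and model name -- adjust to your local setup.
BASE_URL = "http://localhost:8000/v1"
MODEL = "mlx-community/Llama-3.2-3B-Instruct-4bit"

def build_chat_request(prompt: str) -> request.Request:
    """Build an OpenAI-style chat completion request for the local server."""
    body = json.dumps({
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 128,
    }).encode("utf-8")
    return request.Request(
        f"{BASE_URL}/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request("Hello from Apple Silicon")
# Sending the request requires a running server, e.g.:
#   resp = request.urlopen(req)
#   print(json.loads(resp.read())["choices"][0]["message"]["content"])
```

The same endpoint shape is what OpenAI SDK clients expect, so pointing an off-the-shelf client's base URL at the local server should work without further changes.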
Package Info
Tags: anthropic, apple silicon, audio processing, claude code, computer vision, image understanding, inference, llm, machine learning, macos, mllm, mlx, multimodal ai, speech to text, stt, text to speech, tts, video understanding, vision language model, vllm