Search results for the keyword "request": 14 results in total.
Claude MCP Server for GitHub with Linear integration
MCP server implementation for handling run_python requests
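A server like the one above can be sketched in a few lines. The sketch below assumes the official `mcp` Python SDK (its FastMCP helper) rather than the project's actual implementation, and a real server would sandbox the executed code:

```python
# Minimal sketch of a run_python MCP tool, assuming the `mcp` Python SDK.
# A production server would sandbox execution instead of calling exec() directly.
import contextlib
import io

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("run-python")


@mcp.tool()
def run_python(code: str) -> str:
    """Execute a Python snippet and return whatever it prints to stdout."""
    buffer = io.StringIO()
    with contextlib.redirect_stdout(buffer):
        exec(code, {}, {})  # no sandboxing in this sketch
    return buffer.getvalue()


if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio by default
```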
MCP (Model Context Protocol) server that allows users to search for documentation from popular libraries such as LangChain, LlamaIndex, and OpenAI using the Serper API. The server fetches search results through Serper for each query.
The Indian Railways MCP Server provides live station status and train information using the Model Context Protocol (MCP). This server is designed to handle requests for live data from Indian Railways.
This project provides an HTTP server for image generation using Stable Diffusion, along with a Model Context Protocol (MCP) server that enables AI agents to request image generation.
A lightweight TypeScript middleware for MCP SDK servers that delivers analytics. It captures request metrics, performance data, and usage patterns with minimal overhead and features real-time monitoring.
This project is a Model Context Protocol (MCP) server built using Node.js + Express.js that interacts with the OpenWeather API. It allows users to fetch air pollution data based on latitude and longitude.
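The upstream call such a server wraps is a single GET against OpenWeather's Air Pollution endpoint. The sketch below is written in Python for consistency with the other sketches here (the project itself is Node.js + Express.js), and the API key is a placeholder:

```python
# Sketch of the OpenWeather Air Pollution API call; the key is a placeholder.
import requests

OPENWEATHER_API_KEY = "your-api-key"


def get_air_pollution(lat: float, lon: float) -> dict:
    """Fetch current air-pollution data for a latitude/longitude pair."""
    resp = requests.get(
        "https://api.openweathermap.org/data/2.5/air_pollution",
        params={"lat": lat, "lon": lon, "appid": OPENWEATHER_API_KEY},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()  # air-quality index plus pollutant concentrations


print(get_air_pollution(28.6139, 77.2090))  # e.g. New Delhi
```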
An MCP (Model Context Protocol) server for data transformation and BI charts will allow AI assistants to connect to your data sources, transform data, and generate high-quality visualizations.
This project implements a Python-based MCP (Model Context Protocol) server that acts as an interface between Large Language Models (LLMs) and the Google Calendar API. It enables LLMs to perform calendar operations.
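A calendar-listing tool for such a server might look roughly like the following sketch, assuming `google-api-python-client`, pre-obtained OAuth credentials stored in a token.json file, and the FastMCP helper; the tool name is illustrative, not the project's:

```python
# Sketch: list upcoming Google Calendar events as an MCP tool.
# Assumes google-api-python-client and an existing OAuth token file.
import datetime

from google.oauth2.credentials import Credentials
from googleapiclient.discovery import build
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("google-calendar")


@mcp.tool()
def list_upcoming_events(max_results: int = 10) -> list[str]:
    """Return the next events on the primary calendar as short strings."""
    creds = Credentials.from_authorized_user_file("token.json")  # placeholder path
    service = build("calendar", "v3", credentials=creds)
    now = datetime.datetime.utcnow().isoformat() + "Z"  # RFC 3339 timestamp
    events = service.events().list(
        calendarId="primary",
        timeMin=now,
        maxResults=max_results,
        singleEvents=True,
        orderBy="startTime",
    ).execute()
    return [
        f"{e['start'].get('dateTime', e['start'].get('date'))}  {e.get('summary', '(no title)')}"
        for e in events.get("items", [])
    ]


if __name__ == "__main__":
    mcp.run()
```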
It consistently responds with "Ranger!" to any MCP tool request it receives via standard input/output.
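That description is simple enough to reproduce almost verbatim; a sketch assuming the FastMCP helper (the tool name is made up):

```python
# Sketch of a trivial stdio MCP server whose single tool always answers "Ranger!".
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("ranger")


@mcp.tool()
def ranger(message: str = "") -> str:
    """Ignore the input and reply with "Ranger!"."""
    return "Ranger!"


if __name__ == "__main__":
    mcp.run()  # stdio transport, matching the description above
```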
Exposes MinIO data through Resources. The server can access and provide text files (automatically detected based on file extension) and binary files (handled as application/octet-stream).
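Resources are addressed by URI rather than invoked as tools; a hedged sketch of how such a server might expose MinIO objects, assuming FastMCP resource templates and the `minio` client, with placeholder endpoint and credentials:

```python
# Sketch: serve MinIO objects as MCP resources, text or bytes by guessed type.
# Endpoint and credentials are placeholders for a local MinIO instance.
import mimetypes

from mcp.server.fastmcp import FastMCP
from minio import Minio

mcp = FastMCP("minio")
client = Minio("localhost:9000", access_key="minioadmin",
               secret_key="minioadmin", secure=False)


@mcp.resource("minio://{bucket}/{key}")
def read_object(bucket: str, key: str) -> str | bytes:
    """Return an object's contents; decoded text when the extension looks textual."""
    response = client.get_object(bucket, key)
    try:
        data = response.read()
    finally:
        response.close()
        response.release_conn()
    mime, _ = mimetypes.guess_type(key)
    if mime and mime.startswith("text/"):
        return data.decode("utf-8")  # text files, detected by extension
    return data  # binary files, handled as application/octet-stream


if __name__ == "__main__":
    mcp.run()
```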
🚀🤖 Crawl4AI: an open-source, LLM-friendly web crawler and scraper. Crawl4AI is the #1 trending repository on GitHub, actively maintained by a vibrant community. It delivers blazing-fast, AI-ready web crawling tailored for LLMs, AI agents, and data pipelines. Open source, flexible, and built for real-time performance, Crawl4AI gives developers unmatched speed, precision, and ease of deployment. ✨ Check out the latest release, v0.6.0 🎉
MiniMax-M1 is the latest open-source reasoning model from the MiniMax team. It combines a mixture-of-experts (MoE) architecture with a lightning attention mechanism, with 456 billion total parameters and 45.9 billion activated per token. The model surpasses domestic closed-source models, approaches the leading models overseas, and offers industry-leading cost-effectiveness. MiniMax-M1 natively supports a context length of 1 million tokens and comes in two thinking-budget variants (40K and 80K), making it well suited to processing long inputs.