关键词 "scrape captions" 的搜索结果, 共 21 条, 只显示前 480 条
A lightweight prototype demonstrating how to integrate an LLM (via OpenAI) with a Model Context Protocol (MCP) server to extract real-time weather data by scraping and processing open web content.
Originally intended to be an MCP server; now it's a stupid SoundCloud scraper.
MCP server for using ScrAPI to scrape web pages.
An MCP server that scrapes websites, indexes content into Qdrant, and provides a query tool.
An MCP server for (free!) search results via DuckDuckGo.
MCP Server leveraging crawl4ai for web scraping and LLM-based content extraction (Markdown, text snippets, smart extraction). Designed for AI agent integration.
This is an MCP server that provides tools to LLMs for searching and analyzing apps from both Google Play Store and Apple App Store – perfect for ASO.
This MCP server uses the Fresh LinkedIn Profile Data API to fetch LinkedIn profile information. It is implemented as a Model Context Protocol (MCP) server and exposes a single tool, get_profile.
Unblock, scrape, and search tools for MCP clients
The Torobjo MCP Server is a powerful implementation of the Model Context Protocol (MCP) for product search and Instagram analysis. It integrates with the Torob API for product searches and extracts captions from Instagram.
Scrapeless MCP Server
MCP server that extracts text content from webpages, YouTube videos, and PDFs for LLMs to use.
I scraped a lot of information on MCP (Model Context Protocol) servers, with integration for Cursor AI and Claude Desktop. That way you can add this folder to your preferred IDE so that it will have that context available.
An MCP server for pdffigures2: this server processes scholarly PDFs to extract figures, tables, captions, and section titles with high accuracy. It is designed to support researchers and developers.
An MCP server for paperscraper: tools to scrape publication metadata from PubMed, arXiv, medRxiv, bioRxiv, and chemRxiv.
🚀 OneSearch MCP Server: web search, scraping, and extraction; supports Firecrawl, SearXNG, Tavily, DuckDuckGo, Bing, etc.
Scrape any developer documentation and save it locally as a Markdown file, using Anthropic's MCP to standardize communication between the CLI and the documentation server.
Serper MCP Server supporting search and webpage scraping
MCP server for skrape.ai; lets you input any URL and returns clean Markdown for the LLM.
ScrapeGraphAI is an LLM-powered intelligent web-scraping toolkit focused on efficiently extracting structured data from websites and HTML content. It offers three core capabilities: SmartScraper extracts structured information from web pages according to a user prompt; SearchScraper uses AI-driven search to pull key information out of search-engine results; and Markdownify quickly converts web content into clean Markdown for downstream processing and storage.
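As a rough illustration of the SmartScraper workflow described in the last entry, here is a minimal sketch using ScrapeGraphAI's documented SmartScraperGraph entry point; the target URL, prompt, and LLM configuration below are placeholder assumptions, not details from the listing.

```python
# Minimal sketch: extracting structured data with ScrapeGraphAI's SmartScraper.
# The URL, prompt, and LLM settings are illustrative placeholders.
from scrapegraphai.graphs import SmartScraperGraph

graph_config = {
    "llm": {
        "api_key": "YOUR_OPENAI_API_KEY",  # assumed OpenAI-backed configuration
        "model": "openai/gpt-4o-mini",
    },
}

scraper = SmartScraperGraph(
    prompt="List every project on the page with a one-line description",
    source="https://example.com/projects",  # hypothetical target page
    config=graph_config,
)

result = scraper.run()  # returns the extracted data as a Python dict
print(result)
```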
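Most of the entries above follow the same pattern: they expose scraping or extraction functionality to an LLM as MCP tools. As a hedged sketch (not taken from any of the listed projects), this is roughly what such a tool looks like when declared with the official MCP Python SDK's FastMCP helper; the server name, tool name, and fetching logic are assumptions for illustration.

```python
# Illustrative sketch of an MCP "scrape" tool in the style of the servers listed above.
# Uses the official MCP Python SDK (FastMCP) plus httpx and BeautifulSoup; all names here
# are hypothetical and not taken from any specific project in the search results.
import httpx
from bs4 import BeautifulSoup
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-scraper")

@mcp.tool()
def scrape_page(url: str) -> str:
    """Fetch a web page and return its visible text for the LLM to work with."""
    response = httpx.get(url, follow_redirects=True, timeout=30.0)
    response.raise_for_status()
    soup = BeautifulSoup(response.text, "html.parser")
    # Drop scripts and styles, then collapse the remaining text into plain lines.
    for tag in soup(["script", "style"]):
        tag.decompose()
    return "\n".join(line.strip() for line in soup.get_text().splitlines() if line.strip())

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio so an MCP client (e.g. Claude Desktop) can call it
```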