🔥 Turn entire websites into LLM-ready markdown or structured data. Scrape, crawl and extract with a single API.
Crawlee: a web scraping and browser automation library for Node.js to build reliable crawlers, in JavaScript and TypeScript. Extract data for AI, LLMs, RAG, or GPTs. Download HTML, PDF, JPG, PNG, and other files from websites. Works with Puppeteer, Playwright, Cheerio, JSDOM, and raw HTTP. Both headful and headless mode. With proxy rotation.
Distributed web crawler admin platform for spider management, regardless of language or framework.
A next-generation crawler platform that lets you define crawling workflows visually and build crawlers without writing code.
A collection of awesome web crawlers and spiders in different languages.
Ingest, parse, and optimize any data format ➡️ from documents to multimedia ➡️ for enhanced compatibility with GenAI frameworks
Crawlee: a web scraping and browser automation library for Python to build reliable crawlers. Extract data for AI, LLMs, RAG, or GPTs. Download HTML, PDF, JPG, PNG, and other files from websites. Works with BeautifulSoup, Playwright, and raw HTTP. Both headful and headless mode. With proxy rotation.
Cross-platform C# web crawler framework built for speed and flexibility.
A simple, easy-to-use Python crawler framework. QQ discussion group: 597510560.
The Ultimate Information Gathering Toolkit
A web crawler and scraper for Rust
Internet search engine for text-oriented websites. Indexing the small, old and weird web.
A scalable, mature and versatile web crawler based on Apache Storm
A versatile Ruby web spidering library that can spider a site, multiple domains, certain links or infinitely. Spidr is designed to be fast and easy to use.
Automate web pages at scale and scrape web data completely and accurately with high-performance, distributed AI-RPA.
Run a high-fidelity browser-based web archiving crawler in a single Docker container
CLI tool for saving a faithful copy of a complete web page in a single HTML file (based on SingleFile)
ACHE is a web crawler for domain-specific search.