Fast and Elegant Scraping Framework for Gophers


Colly provides a clean interface to write any kind of crawler/scraper/spider.

With Colly you can easily extract structured data from websites, which can be used for a wide range of applications such as data mining, data processing, or archiving.
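For example, a collector can map matched elements straight into your own types using CSS selectors. The sketch below is illustrative only: the product struct, selectors, and URL are assumptions, not part of Colly itself.

package main

import (
	"fmt"

	"github.com/gocolly/colly/v2"
)

// product holds the fields we want to collect.
// The struct, URL, and selectors are illustrative assumptions.
type product struct {
	Name  string
	Price string
}

func main() {
	c := colly.NewCollector()

	// Extract one product per matching element using child selectors.
	c.OnHTML("div.product", func(e *colly.HTMLElement) {
		p := product{
			Name:  e.ChildText("h2"),
			Price: e.ChildText(".price"),
		}
		fmt.Printf("%+v\n", p)
	})

	c.Visit("https://example.com/products")
}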

Features

  • Clean API
  • Fast (>1k requests/sec on a single core)
  • Manages request delays and maximum concurrency per domain (see the sketch after the example below)
  • Automatic cookie and session handling
  • Sync/async/parallel scraping
  • Distributed scraping
  • Caching
  • Automatic decoding of non-Unicode responses
  • Robots.txt support
  • Google App Engine support
package main

import (
	"fmt"

	"github.com/gocolly/colly/v2"
)

func main() {
	c := colly.NewCollector()

	// Find and visit all links
	c.OnHTML("a", func(e *colly.HTMLElement) {
		e.Request.Visit(e.Attr("href"))
	})

	c.OnRequest(func(r *colly.Request) {
		fmt.Println("Visiting", r.URL)
	})

	c.Visit("http://go-colly.org/")
}
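To show how the features listed above map onto the collector API, here is a minimal sketch that turns on asynchronous scraping, limits per-domain concurrency and delay with a LimitRule, and caches responses on disk. The domain glob, delay, cache directory, and target URL are illustrative assumptions.

package main

import (
	"time"

	"github.com/gocolly/colly/v2"
)

func main() {
	// Async collector that caches responses under ./colly-cache (path is an assumption).
	c := colly.NewCollector(
		colly.Async(true),
		colly.CacheDir("./colly-cache"),
	)

	// Cap concurrency and add a polite delay for every matched domain.
	c.Limit(&colly.LimitRule{
		DomainGlob:  "*go-colly.org*",
		Parallelism: 2,
		Delay:       1 * time.Second,
	})

	c.Visit("http://go-colly.org/")

	// Wait for all asynchronous requests to finish.
	c.Wait()
}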

Batteries included

Colly comes with all the tools you need for scraping.
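As one example of the bundled tooling, the queue sub-package feeds a set of URLs to a pool of consumer threads. A minimal sketch, assuming the in-memory storage backend and placeholder URLs:

package main

import (
	"fmt"
	"log"

	"github.com/gocolly/colly/v2"
	"github.com/gocolly/colly/v2/queue"
)

func main() {
	c := colly.NewCollector()

	c.OnRequest(func(r *colly.Request) {
		fmt.Println("Visiting", r.URL)
	})

	// Two consumer threads, backed by an in-memory queue of up to 10000 URLs.
	q, err := queue.New(2, &queue.InMemoryQueueStorage{MaxSize: 10000})
	if err != nil {
		log.Fatal(err)
	}

	// Placeholder URLs; replace with the pages you actually want to scrape.
	for i := 1; i <= 5; i++ {
		q.AddURL(fmt.Sprintf("http://go-colly.org/?page=%d", i))
	}

	// Run blocks until the queue is drained.
	q.Run(c)
}

The same queue can be pointed at a shared storage backend (for example a Redis-backed one) when work has to be spread across machines.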


Business ready

Free for commercial use.


Open Source

Development of Colly is community-driven and public.


Sponsors

Scrapfly is an enterprise-grade web scraping API that simplifies scraping by managing everything for you: real browser rendering, rotating proxies, and fingerprints (TLS, HTTP, browser) to bypass major anti-bot systems. Scrapfly also provides observability through an analytical dashboard that measures success and block rates in detail.