Distributed scraping

Distributed scraping can be implemented in different ways depending on the requirements of the scraping task. Most of the time it is enough to scale the network communication layer, which can easily be achieved using proxies and Colly's proxy switchers.

Proxy switchers

Using proxy switchers, scraping remains centralized while the HTTP requests are distributed among multiple proxies. Colly supports proxy switching via its SetProxyFunc() method. Any custom function with the signature func(*http.Request) (*url.URL, error) can be passed to SetProxyFunc().


SSH servers can be used as SOCKS5 proxies with the -D flag (for example, ssh -D 1337 user@remote-host opens a local SOCKS5 proxy on port 1337).

Colly has a built-in proxy switcher which rotates a list of proxies on every request.


```go
package main

import (
	"github.com/gocolly/colly"
	"github.com/gocolly/colly/proxy"
)

func main() {
	c := colly.NewCollector()

	// Rotate between the listed proxies on every request.
	// The socks5 addresses below are example placeholders.
	if p, err := proxy.RoundRobinProxySwitcher(
		"socks5://127.0.0.1:1337",
		"socks5://127.0.0.1:1338",
	); err == nil {
		c.SetProxyFunc(p)
	}
	// ...
}
```

Implementing a custom proxy switcher:

```go
// Requires the math/rand, net/http and net/url imports.
// The proxy hosts below are example placeholders.
var proxies []*url.URL = []*url.URL{
	&url.URL{Host: "127.0.0.1:8080"},
	&url.URL{Host: "127.0.0.1:8081"},
}

func randomProxySwitcher(_ *http.Request) (*url.URL, error) {
	return proxies[rand.Intn(len(proxies))], nil
}

// ...
c.SetProxyFunc(randomProxySwitcher)
```

Distributed scrapers

To manage independent and distributed scrapers, the best approach is to wrap the scraper in a server. The server can be any kind of service, such as an HTTP or TCP server, or a Google App Engine application. Use custom storage to achieve centralized and persistent cookie and visited-URL handling.


Colly has built-in Google App Engine support. Don't forget to call Collector.Appengine(*http.Request) if you use Colly from the App Engine standard environment.

An example implementation can be found here.

Distributed storage

Visited URL and cookie data are stored in memory by default. This is handy for short-lived scraper jobs, but it can be a serious limitation when dealing with large-scale or long-running crawling jobs.

Colly has the ability to replace the default in-memory storage with any storage backend which implements the colly/storage.Storage interface. Check out the existing storage backends.