Using Scrapy (7): Avoiding Getting Banned

Scrapinghub documentation: https://doc.scrapinghub.com/index.html

According to the Scrapy documentation at http://doc.scrapy.org/en/master/topics/practices.html#avoiding-getting-banned, the main strategies for keeping a Scrapy crawler from getting banned are the following.

I. Combine disabled cookies, rotating user agents, proxy IPs, VPNs, and similar measures to keep the crawler from being banned

1. Create middlewares.py: custom proxy-IP and user-agent middlewares

In Scrapy, both proxy-IP and user-agent rotation are wired in through DOWNLOADER_MIDDLEWARES. Example:

[root@bogon cnblogs]# vi cnblogs/middlewares.py

import base64
import random

from cnblogs.settings import PROXIES


class RandomUserAgent(object):
    """Randomly rotate user agents based on a list of predefined ones."""

    def __init__(self, agents):
        self.agents = agents

    @classmethod
    def from_crawler(cls, crawler):
        return cls(crawler.settings.getlist('USER_AGENTS'))

    def process_request(self, request, spider):
        # setdefault leaves any explicitly set User-Agent header alone
        request.headers.setdefault('User-Agent', random.choice(self.agents))


class ProxyMiddleware(object):
    def process_request(self, request, spider):
        proxy = random.choice(PROXIES)
        request.meta['proxy'] = "http://%s" % proxy['ip_port']
        if proxy['user_pass']:  # only authenticated proxies need the auth header
            # b64encode (unlike the removed base64.encodestring) adds no trailing newline
            encoded_user_pass = base64.b64encode(
                proxy['user_pass'].encode('utf-8')).decode('ascii')
            request.headers['Proxy-Authorization'] = 'Basic ' + encoded_user_pass
        spider.logger.debug("ProxyMiddleware using %s", proxy['ip_port'])

The RandomUserAgent class picks a user agent at random for each request; the USER_AGENTS list is configured in settings.py.

The ProxyMiddleware class rotates proxies; the PROXIES list is likewise configured in settings.py.

2. Edit settings.py: configure USER_AGENTS and PROXIES, disable cookies, set a download delay, and register the downloader middlewares

a) Add USER_AGENTS:

USER_AGENTS = [
    "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; AcooBrowser; .NET CLR 1.1.4322; .NET CLR 2.0.50727)",
    "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0; Acoo Browser; SLCC1; .NET CLR 2.0.50727; Media Center PC 5.0; .NET CLR 3.0.04506)",
    "Mozilla/4.0 (compatible; MSIE 7.0; AOL 9.5; AOLBuild 4337.35; Windows NT 5.1; .NET CLR 1.1.4322; .NET CLR 2.0.50727)",
    "Mozilla/5.0 (Windows; U; MSIE 9.0; Windows NT 9.0; en-US)",
    "Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Win64; x64; Trident/5.0; .NET CLR 3.5.30729; .NET CLR 3.0.30729; .NET CLR 2.0.50727; Media Center PC 6.0)",
    "Mozilla/5.0 (compatible; MSIE 8.0; Windows NT 6.0; Trident/4.0; WOW64; Trident/4.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; .NET CLR 1.0.3705; .NET CLR 1.1.4322)",
    "Mozilla/4.0 (compatible; MSIE 7.0b; Windows NT 5.2; .NET CLR 1.1.4322; .NET CLR 2.0.50727; InfoPath.2; .NET CLR 3.0.04506.30)",
    "Mozilla/5.0 (Windows; U; Windows NT 5.1; zh-CN) AppleWebKit/523.15 (KHTML, like Gecko, Safari/419.3) Arora/0.3 (Change: 287 c9dfb30)",
    "Mozilla/5.0 (X11; U; Linux; en-US) AppleWebKit/527+ (KHTML, like Gecko, Safari/419.3) Arora/0.6",
    "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.2pre) Gecko/20070215 K-Ninja/2.1.1",
    "Mozilla/5.0 (Windows; U; Windows NT 5.1; zh-CN; rv:1.9) Gecko/20080705 Firefox/3.0 Kapiko/3.0",
    "Mozilla/5.0 (X11; Linux i686; U;) Gecko/20070322 Kazehakase/0.4.5",
    "Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.0.8) Gecko Fedora/1.9.0.8-1.fc10 Kazehakase/0.5.6",
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.56 Safari/535.11",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_3) AppleWebKit/535.20 (KHTML, like Gecko) Chrome/19.0.1036.7 Safari/535.20",
    "Opera/9.80 (Macintosh; Intel Mac OS X 10.6.8; U; fr) Presto/2.9.168 Version/11.52",
]

Alternatively, use the open-source fake-useragent project to rotate and manage User-Agent strings: https://github.com/hellysmile/fake-useragent

fake-useragent maintains a cached set of real-world User-Agent strings: https://fake-useragent.herokuapp.com/browsers/0.1.8
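A minimal sketch of a downloader middleware built on fake-useragent (the class name here is illustrative; only UserAgent().random is the library's API):

from fake_useragent import UserAgent


class FakeUserAgentMiddleware(object):
    """Set a random real-world User-Agent string on every request."""

    def __init__(self):
        self.ua = UserAgent()  # downloads and caches real browser usage data

    def process_request(self, request, spider):
        request.headers.setdefault('User-Agent', self.ua.random)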

b) Add the proxy list PROXIES:

PROXIES = [
    {'ip_port': '111.11.228.75:80', 'user_pass': ''},
    {'ip_port': '120.198.243.22:80', 'user_pass': ''},
    {'ip_port': '111.8.60.9:8123', 'user_pass': ''},
    {'ip_port': '101.71.27.120:80', 'user_pass': ''},
    {'ip_port': '122.96.59.104:80', 'user_pass': ''},
    {'ip_port': '122.224.249.122:8088', 'user_pass': ''},
]

Proxy IPs are easy to find online; the ones above came from http://www.xici.net.co/.
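Free proxies go stale quickly, so it is worth checking them before pasting the list into settings.py. A minimal sketch using requests; the test URL and timeout are arbitrary choices, not from the original post:

import requests


def is_alive(ip_port, timeout=5):
    """Return True if the proxy answers a simple HTTP request in time."""
    try:
        r = requests.get("http://httpbin.org/ip",
                         proxies={"http": "http://%s" % ip_port},
                         timeout=timeout)
        return r.status_code == 200
    except requests.RequestException:
        return False


candidates = ['111.11.228.75:80', '120.198.243.22:80']
alive = [p for p in candidates if is_alive(p)]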

c) Disable cookies:

COOKIES_ENABLED = False

d) Set a download delay:

DOWNLOAD_DELAY = 3
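A perfectly fixed delay is itself a pattern. Scrapy's built-in RANDOMIZE_DOWNLOAD_DELAY setting (enabled by default) spreads each wait across 0.5x to 1.5x of DOWNLOAD_DELAY:

RANDOMIZE_DOWNLOAD_DELAY = True  # default; each wait is 0.5x-1.5x DOWNLOAD_DELAY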

e) Finally, register the middlewares in DOWNLOADER_MIDDLEWARES:

DOWNLOADER_MIDDLEWARES = {
    'cnblogs.middlewares.RandomUserAgent': 1,
    'cnblogs.middlewares.ProxyMiddleware': 100,
    'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware': 110,
    # 'scrapy.contrib.downloadermiddleware.httpproxy.HttpProxyMiddleware': 110,  # pre-1.0 path, deprecated
}
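ProxyMiddleware runs at priority 100, before the built-in HttpProxyMiddleware at 110, which honors request.meta['proxy']. Because RandomUserAgent runs at priority 1 and uses setdefault, Scrapy's stock user-agent middleware (priority 500) will not overwrite the random value; still, you can make the intent explicit by disabling the built-in one. An optional, assumed addition, not from the original post:

# settings.py -- optional: disable Scrapy's built-in User-Agent middleware
# so only RandomUserAgent ever sets the header (mapping to None disables it)
DOWNLOADER_MIDDLEWARES.update({
    'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware': None,
})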

 

II. Crawlera: a proxy IP pool

Crawlera is a third-party service that uses a pool of proxy IPs for distributed downloading. It is not Scrapy-only: plain Java, PHP, Python, and so on can all call it, for example via curl.

Crawlera website: http://scrapinghub.com/crawlera/

Crawlera documentation: http://doc.scrapinghub.com/crawlera.html

(1) Register a Crawlera account and obtain the Crawlera API key

1. Register and activate a Crawlera account:

https://dash.scrapinghub.com/account/signup/

Fill in a username, email address, and password, then click "sign up" to finish registering; confirm via the registration email you receive.

2. Create an organization.

3. After the organization is created, add a Crawlera user.

4. View the API key: clicking the Crawlera user's name (jack in this example) shows the key details.

At this point we have the Crawlera API credentials.

(2) Modify the Scrapy project

Now let's wire Crawlera into the Scrapy project.

1. Install scrapy-crawlera:

pip install scrapy-crawlera

2. Edit settings.py:

Add an entry under DOWNLOADER_MIDDLEWARES:

'scrapy_crawlera.CrawleraMiddleware': 600 

Other settings:

CRAWLERA_ENABLED = True
CRAWLERA_USER = '<API key>'
CRAWLERA_PASS = '<your Crawlera account password>'

Note: because the project previously used the custom proxy approach, these two DOWNLOADER_MIDDLEWARES entries must be commented out:

#'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware': 110,  # only needed for the custom proxy setup
#'cnblogs.middlewares.ProxyMiddleware': 100,  # only needed for the custom proxy setup
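Putting it together, the Crawlera-related part of settings.py ends up looking roughly like this (a sketch; key and password elided):

# settings.py
DOWNLOADER_MIDDLEWARES = {
    'cnblogs.middlewares.RandomUserAgent': 1,
    # 'cnblogs.middlewares.ProxyMiddleware': 100,                         # disabled: Crawlera takes over
    # 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware': 110,  # disabled: Crawlera takes over
    'scrapy_crawlera.CrawleraMiddleware': 600,
}

CRAWLERA_ENABLED = True
CRAWLERA_USER = '<API key>'
CRAWLERA_PASS = '<account password>'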

3. Test that the crawl works through Crawlera:

scrapy crawl CnblogsSpider

4. Check the results.

The crawl output shows that Crawlera is working normally.

5. The crawl results can also be viewed on the Crawlera dashboard.

That wraps up crawling through Crawlera with Scrapy. Crawlera also offers paid customization services; if the budget allows, a paid, customized Scrapy crawler may be worth considering.

The code has been updated here: https://github.com/jackgitgz/CnblogsSpider (the API key and password were removed from the GitHub copy; add your own to run it).

 

III. Google cache: requires access past the GFW

Instead of hitting the target site directly, pages can be fetched from Google's cached copies.
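A minimal sketch of this idea as a downloader middleware; the class name is made up, and the URL prefix is Google's public cache endpoint (note that cached copies can be stale and not every page is cached):

class GoogleCacheMiddleware(object):
    """Fetch Google's cached copy of each page instead of the site itself."""

    CACHE_PREFIX = 'http://webcache.googleusercontent.com/search?q=cache:'

    def process_request(self, request, spider):
        if not request.url.startswith(self.CACHE_PREFIX):
            # Returning a new Request reschedules it through the middleware chain
            return request.replace(url=self.CACHE_PREFIX + request.url)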

 

IV. Outside a crawler: using the proxy pool from plain requests

If you are not running a Scrapy crawler and just want to call it from Python, Crawlera can also be used directly:

1. Via requests with a proxy:

import requests

url = "http://twitter.com"
proxy = "paygo.crawlera.com:8010"
proxy_auth = "<API KEY>:"

proxies = {
    "http": "http://{0}@{1}/".format(proxy_auth, proxy)
}

headers = {
    "X-Crawlera-Use-HTTPS": "1"  # header values must be strings, not ints
}

r = requests.get(url, proxies=proxies, headers=headers)

print("""
Requesting [{}]
through proxy [{}]

Response Time: {}
Response Code: {}
Response Headers:
{}

Response Body:
{}
""".format(url, proxy, r.elapsed.total_seconds(), r.status_code, r.headers, r.text))

2. Via requests, rewriting the URL: https URLs are downgraded to http, and the X-Crawlera-Use-HTTPS header tells Crawlera to fetch them over HTTPS:

import requests
from requests.auth import HTTPProxyAuth

url = "https://twitter.com"
headers = {}
proxy_host = "paygo.crawlera.com"
proxy_auth = HTTPProxyAuth("<API KEY>", "")
proxies = {"http": "http://{}:8010/".format(proxy_host)}

if url.startswith("https:"):
    url = "http://" + url[8:]
    headers["X-Crawlera-Use-HTTPS"] = "1"

r = requests.get(url, headers=headers, proxies=proxies, auth=proxy_auth)

print("""
Requesting [{}]
through proxy [{}]

Response Time: {}
Response Code: {}
Response Headers:
{}

Response Body:
{}
""".format(url, proxy_host, r.elapsed.total_seconds(), r.status_code, 
           r.headers, r.text))

 

V. Ways to make the crawler harder to identify

Three common approaches:

1. Disable cookies in settings:

COOKIES_ENABLED = False

2. Throttle the crawl: Scrapy ships with the AutoThrottle extension, which adjusts the download delay dynamically.

# http://scrapy-chs.readthedocs.io/zh_CN/latest/topics/autothrottle.html
# Settings to configure in settings.py

AUTOTHROTTLE_ENABLED = True      # enable the AutoThrottle extension

AUTOTHROTTLE_START_DELAY = 5.0   # initial download delay (seconds)

AUTOTHROTTLE_MAX_DELAY = 60.0    # maximum download delay under high latency (seconds)

AUTOTHROTTLE_DEBUG = True        # debug mode: log every response received, so you can watch the throttle adjust in real time

 

3. Configure different settings per spider.

# e.g. some sites work without cookies while others require them;
# override settings per spider via the class attribute custom_settings

import scrapy

class TestSpider(scrapy.Spider):
    name = "test"  # spider name added so the example runs as-is

    custom_settings = {
        "COOKIES_ENABLED": True,
        "AUTOTHROTTLE_ENABLED": True,
    }

 

Reposted from: https://www.cnblogs.com/rwxwsblog/p/4582127.html

 
