1. Purpose: intercept requests and responses in bulk
2. Intercepting requests
1) Tamper with request header fields (e.g. User-Agent)
2) Set a proxy IP on the request object (done in process_exception)
3. Intercepting responses
While the downloader delivers the Response to the engine, the downloader middleware can apply a series of transformations to it, such as gzip decompression.
Using the downloader middleware
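The gzip case mentioned above can be sketched outside Scrapy with only the standard library; `maybe_decompress` is an illustrative helper, not Scrapy API:

```python
import gzip

def maybe_decompress(body: bytes, content_encoding: str) -> bytes:
    # Mimics what a downloader middleware could do in process_response:
    # if the server returned a gzip-compressed body, decompress it
    if content_encoding == "gzip":
        return gzip.decompress(body)
    return body

raw = gzip.compress(b"<html>hello</html>")
print(maybe_decompress(raw, "gzip"))  # b'<html>hello</html>'
```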
import random

from scrapy import signals


class MiddleproDownloaderMiddleware(object):
    user_agent_list = [
        "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/535.24 "
        "(KHTML, like Gecko) Chrome/19.0.1055.1 Safari/535.24",
        "Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/535.24 "
        "(KHTML, like Gecko) Chrome/19.0.1055.1 Safari/535.24"
    ]
    # Candidate proxy IPs
    PROXY_http = [
        '120.234.63.196:3128',
        '195.208.131.189:56055',
    ]
    PROXY_https = [
        '120.234.63.196:3128',
        '195.208.131.189:56055',
    ]

    @classmethod
    def from_crawler(cls, crawler):
        # Called by Scrapy to instantiate the downloader middleware
        s = cls()
        crawler.signals.connect(s.spider_opened, signal=signals.spider_opened)
        return s

    def process_request(self, request, spider):
        # Intercepts every non-exception request.
        # request: the request object; spider: the spider instance from the spider file
        print('middleware process_request()')
        request.headers['User-Agent'] = random.choice(self.user_agent_list)
        return None

    def process_response(self, request, response, spider):
        # Intercepts every response; modify/tamper with response data here
        return response

    def process_exception(self, request, exception, spider):
        # Intercepts requests that raised an exception.
        # Note: return the request so it gets rescheduled
        if request.url.split(':')[0] == 'https':
            request.meta['proxy'] = 'https://' + random.choice(self.PROXY_https)
        else:
            request.meta['proxy'] = 'http://' + random.choice(self.PROXY_http)
        return request

    def spider_opened(self, spider):
        # Log when the spider opens
        spider.logger.info('Spider opened: %s' % spider.name)
settings configuration
# 543 is the priority
DOWNLOADER_MIDDLEWARES = {
    'middlePro.middlewares.MiddleproDownloaderMiddleware': 543,
}
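In Scrapy, a lower priority value places a downloader middleware closer to the engine. The relative ordering can be illustrated by sorting the setting dict by value (the second entry here is made up for the demonstration):

```python
# Illustration only: how the priority numbers order downloader middlewares.
# 'middlePro...' is the project middleware above; 'demo.OtherMiddleware' is hypothetical.
DOWNLOADER_MIDDLEWARES = {
    'middlePro.middlewares.MiddleproDownloaderMiddleware': 543,
    'demo.OtherMiddleware': 100,
}
# Lower value -> earlier in the ordered middleware chain
ordered = sorted(DOWNLOADER_MIDDLEWARES, key=DOWNLOADER_MIDDLEWARES.get)
print(ordered)  # ['demo.OtherMiddleware', 'middlePro.middlewares.MiddleproDownloaderMiddleware']
```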
User-Agent pool
1. Purpose: disguise the requests of a Scrapy project as coming from as many different browser identities as possible.
2. Workflow:
- 1. Intercept the request in the downloader middleware
- 2. Tamper with the UA in the intercepted request's headers
- 3. Enable the downloader middleware in the settings file
user_agent_list = [
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 "
    "(KHTML, like Gecko) Chrome/22.0.1207.1 Safari/537.1",
    "Mozilla/5.0 (X11; CrOS i686 2268.111.0) AppleWebKit/536.11 "
    "(KHTML, like Gecko) Chrome/20.0.1132.57 Safari/536.11",
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.6 "
    "(KHTML, like Gecko) Chrome/20.0.1092.0 Safari/536.6",
    "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.6 "
    "(KHTML, like Gecko) Chrome/20.0.1090.0 Safari/536.6",
    "Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.1 "
    "(KHTML, like Gecko) Chrome/19.77.34.5 Safari/537.1",
    "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/536.5 "
    "(KHTML, like Gecko) Chrome/19.0.1084.9 Safari/536.5",
    "Mozilla/5.0 (Windows NT 6.0) AppleWebKit/536.5 "
    "(KHTML, like Gecko) Chrome/19.0.1084.36 Safari/536.5",
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 "
    "(KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
    "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/536.3 "
    "(KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_0) AppleWebKit/536.3 "
    "(KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
    "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 "
    "(KHTML, like Gecko) Chrome/19.0.1062.0 Safari/536.3",
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 "
    "(KHTML, like Gecko) Chrome/19.0.1062.0 Safari/536.3",
    "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 "
    "(KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 "
    "(KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
    "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/536.3 "
    "(KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
    "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 "
    "(KHTML, like Gecko) Chrome/19.0.1061.0 Safari/536.3",
    "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/535.24 "
    "(KHTML, like Gecko) Chrome/19.0.1055.1 Safari/535.24",
    "Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/535.24 "
    "(KHTML, like Gecko) Chrome/19.0.1055.1 Safari/535.24"
]
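The three steps above boil down to one line in process_request: pick a random entry from the pool and overwrite the UA header. A minimal stand-alone sketch (the short list and `fake_process_request` helper are illustrative, not Scrapy API):

```python
import random

# Stand-in for the full user_agent_list above; any non-empty list of UA strings works
user_agent_list = [
    "Mozilla/5.0 (Windows NT 6.1; WOW64) Example-UA-1",
    "Mozilla/5.0 (X11; Linux x86_64) Example-UA-2",
]

def fake_process_request(headers: dict) -> dict:
    # Mirrors what process_request does: overwrite the UA header on each request
    headers['User-Agent'] = random.choice(user_agent_list)
    return headers

headers = fake_process_request({})
assert headers['User-Agent'] in user_agent_list
```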
Proxy pool
1. Purpose: give the requests of a Scrapy project as many different source IPs as possible.
2. Workflow:
- 1. Intercept the request in the downloader middleware
- 2. Change the intercepted request's IP to some proxy IP
- 3. Enable the downloader middleware in the settings file
PROXY_http = [
    '153.180.102.104:80',
    '195.208.131.189:56055',
]
PROXY_https = [
    '120.83.49.90:9000',
    '95.189.112.214:35508',
]
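The scheme-based pool selection used in process_exception can be sketched as a plain function; `pick_proxy` is a hypothetical helper that mirrors the middleware logic, not Scrapy API:

```python
import random

PROXY_http = ['153.180.102.104:80', '195.208.131.189:56055']
PROXY_https = ['120.83.49.90:9000', '95.189.112.214:35508']

def pick_proxy(url: str) -> str:
    # Mirror process_exception: choose the pool by the URL scheme,
    # then prepend the matching scheme to the chosen host:port
    if url.split(':')[0] == 'https':
        return 'https://' + random.choice(PROXY_https)
    return 'http://' + random.choice(PROXY_http)

print(pick_proxy('https://example.com'))
```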