Scrapy log levels and request parameter passing

1. Scrapy log levels

- When you run a spider with scrapy crawl spiderFileName, what gets printed in the terminal is Scrapy's log output.

- Log levels:

  * ERROR : general errors

  * WARNING : warnings

  * INFO : informational messages

  * DEBUG : debugging information

- Directing log output:

  * In the settings.py configuration file, add:

LOG_LEVEL = '<desired level>' keeps only messages at that level and above.

LOG_FILE = 'log.txt' writes the log output to the given file instead of the terminal.
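For example, to keep only errors and store them in a file, both options can be combined in settings.py:

# settings.py
LOG_LEVEL = 'ERROR'   # suppress everything below ERROR
LOG_FILE = 'log.txt'  # write the log to log.txt instead of the terminal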


2. Request parameter passing

- In some cases the data we want is not all on one page. For example, when crawling a movie site, the movie's name and rating sit on the first-level page while the rest of the details sit on a second-level subpage; this is when we need request parameter passing, so the first callback can hand data to the second.

- Case study: crawl the movie site https://www.4567tv.tv (the target used in the code below), scraping each movie's name from the first-level page and its description from the second-level detail page.

Spider file:

# -*- coding: utf-8 -*-
import scrapy
from qiubai.items import QiubaiItem


class QiushiSpider(scrapy.Spider):
    name = 'qiushi'
    # allowed_domains = ['www.xxx.com']
    start_urls = ['https://www.4567tv.tv/frim/index1.html']

    def detail_parse(self, response):
        # Retrieve the item that parse() attached to this request via meta
        item = response.meta['item']
        describe = response.xpath('//p[@class="desc detail hidden-xs"]/span[2]/text()|//span[@class="detail-sketch"]/text()').extract_first()
        item['describe'] = describe
        yield item

    def parse(self, response):
        li_list = response.xpath('//ul[@class="stui-vodlist clearfix"]/li')
        for li in li_list:
            item = QiubaiItem()
            title = li.xpath('./div/a/@title').extract_first()
            # urljoin avoids the doubled slash that plain string concatenation produces
            detail_url = response.urljoin(li.xpath('./div/a/@href').extract_first())

            item['name'] = title

            # Hand the half-filled item to the detail-page callback through meta
            yield scrapy.Request(url=detail_url, callback=self.detail_parse, meta={'item': item})
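As an aside, Scrapy 1.7+ also offers cb_kwargs as a cleaner channel for this hand-off, since meta is also used internally by middlewares. A minimal sketch of the same flow (the spider name qiushi_kwargs is just for illustration):

import scrapy
from qiubai.items import QiubaiItem

class QiushiKwargsSpider(scrapy.Spider):
    name = 'qiushi_kwargs'
    start_urls = ['https://www.4567tv.tv/frim/index1.html']

    def parse(self, response):
        for li in response.xpath('//ul[@class="stui-vodlist clearfix"]/li'):
            item = QiubaiItem()
            item['name'] = li.xpath('./div/a/@title').extract_first()
            detail_url = response.urljoin(li.xpath('./div/a/@href').extract_first())
            # cb_kwargs entries arrive as keyword arguments of the callback
            yield scrapy.Request(url=detail_url, callback=self.detail_parse,
                                 cb_kwargs={'item': item})

    def detail_parse(self, response, item):
        item['describe'] = response.xpath('//span[@class="detail-sketch"]/text()').extract_first()
        yield item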

items file:

# -*- coding: utf-8 -*-

# Define here the models for your scraped items
#
# See documentation in:
# https://doc.scrapy.org/en/latest/topics/items.html

import scrapy


class QiubaiItem(scrapy.Item):
    # define the fields for your item here like:
    name = scrapy.Field()
    describe = scrapy.Field()

Pipeline file:

# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://doc.scrapy.org/en/latest/topics/item-pipeline.html


class QiubaiPipeline(object):
    def process_item(self, item, spider):
        # Just print each crawled item; a real pipeline would persist it instead
        print(item)
        return item
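As the generated comment reminds us, a pipeline only runs once it is registered in settings.py; for this project that means:

# settings.py
ITEM_PIPELINES = {
    'qiubai.pipelines.QiubaiPipeline': 300,
}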

3. How to improve Scrapy's crawl efficiency

- Increase concurrency

By default Scrapy performs 16 concurrent requests (the CONCURRENT_REQUESTS default is 16). You can raise this as appropriate by setting CONCURRENT_REQUESTS = 100 in the settings file.

- Lower the log level:

A running Scrapy job emits a lot of log output. To reduce CPU usage, restrict logging to INFO or ERROR in the settings file: LOG_LEVEL = 'INFO'

- Disable cookies:

Unless cookies are genuinely needed, disable them while crawling to reduce CPU usage. In the settings file: COOKIES_ENABLED = False

- Disable retries:

Re-requesting (retrying) failed HTTP requests slows the crawl down, so retries can be disabled. In the settings file: RETRY_ENABLED = False

- Reduce the download timeout:

When a link is very slow, a lower download timeout lets stuck requests be abandoned quickly, which improves throughput. In the settings file: DOWNLOAD_TIMEOUT = 10 sets a 10-second timeout. All five settings are combined in the sketch below.
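Putting the tips above together, the relevant block of settings.py might look like this (the values are the ones quoted above):

# settings.py -- speed-oriented tweaks
CONCURRENT_REQUESTS = 100    # up from the default of 16
LOG_LEVEL = 'INFO'           # less logging, less CPU
COOKIES_ENABLED = False      # skip cookie handling
RETRY_ENABLED = False        # don't re-request failed URLs
DOWNLOAD_TIMEOUT = 10        # abandon responses slower than 10s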


Test case: crawl the photo galleries from the Xiaohua site (www.521609.com):

Spider file:

# -*- coding: utf-8 -*-
import scrapy
from xiaohua.items import XiaohuaItem

class XiahuaSpider(scrapy.Spider):

    name = 'xiaohua'
    allowed_domains = ['www.521609.com']
    start_urls = ['http://www.521609.com/daxuemeinv/']

    # Counter and URL template for the paginated listing (pages 2 through 10)
    pageNum = 1
    url = 'http://www.521609.com/daxuemeinv/list8%d.html'

    def parse(self, response):
        li_list = response.xpath('//div[@class="index_img list_center"]/ul/li')
        for li in li_list:
            school = li.xpath('./a/img/@alt').extract_first()
            img_url = li.xpath('./a/img/@src').extract_first()

            item = XiaohuaItem()
            item['school'] = school
            # @src is site-relative, so prepend the host
            item['img_url'] = 'http://www.521609.com' + img_url

            yield item

        # Recursively request the next listing page until page 10
        if self.pageNum < 10:
            self.pageNum += 1
            url = self.url % self.pageNum
            yield scrapy.Request(url=url, callback=self.parse)

items file:

# -*- coding: utf-8 -*-

# Define here the models for your scraped items
#
# See documentation in:
# https://doc.scrapy.org/en/latest/topics/items.html

import scrapy


class XiaohuaItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    school=scrapy.Field()
    img_url=scrapy.Field()

Pipeline file:

# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://doc.scrapy.org/en/latest/topics/item-pipeline.html

import json
import os
import urllib.request

class XiaohuaPipeline(object):
    def __init__(self):
        self.fp = None

    def open_spider(self, spider):
        print('Spider started')
        # utf-8 so the Chinese school names serialize cleanly
        self.fp = open('./xiaohua.txt', 'w', encoding='utf-8')

    def download_img(self, item):
        url = item['img_url']
        fileName = item['school'] + '.jpg'
        if not os.path.exists('./xiaohualib'):
            os.mkdir('./xiaohualib')
        filepath = os.path.join('./xiaohualib', fileName)
        urllib.request.urlretrieve(url, filepath)
        print(fileName + ' downloaded')

    def process_item(self, item, spider):
        # Serialize the item as one JSON object per line
        obj = dict(item)
        json_str = json.dumps(obj, ensure_ascii=False)
        self.fp.write(json_str + '\n')

        # Download the image as well
        self.download_img(item)
        return item

    def close_spider(self, spider):
        print('Spider finished')
        self.fp.close()
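A caveat: urllib.request.urlretrieve blocks Scrapy's event loop, so for larger crawls the built-in ImagesPipeline is the more idiomatic choice, since it downloads through Scrapy's own scheduler. A minimal sketch, assuming Scrapy 2.4+ (for the item argument to file_path) and Pillow installed:

import scrapy
from scrapy.pipelines.images import ImagesPipeline

class XiaohuaImagesPipeline(ImagesPipeline):
    def get_media_requests(self, item, info):
        # Hand the image URL to Scrapy's downloader instead of urllib
        yield scrapy.Request(item['img_url'])

    def file_path(self, request, response=None, info=None, *, item=None):
        # Keep the same naming scheme as download_img above
        return item['school'] + '.jpg'

To enable it, settings.py would need ITEM_PIPELINES = {'xiaohua.pipelines.XiaohuaImagesPipeline': 1} and IMAGES_STORE = './xiaohualib'.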

Settings file:

USER_AGENT = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/68.0.3440.106 Safari/537.36'

# Obey robots.txt rules
ROBOTSTXT_OBEY = False

# Configure maximum concurrent requests performed by Scrapy (default: 16)
CONCURRENT_REQUESTS = 100
COOKIES_ENABLED = False
LOG_LEVEL = 'ERROR'
RETRY_ENABLED = False
DOWNLOAD_TIMEOUT = 3
# Configure a delay for requests for the same website (default: 0)
# See https://doc.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16
DOWNLOAD_DELAY = 3

