Python Learning Notes: Scrapy

Requests Module

  1. Using the session object requests.Session() (see the sketch below)
    Reference: https://wenku.baidu.com/view/1cad4d27cf1755270722192e453610661ed95a25.html
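    A minimal sketch of how a Session keeps cookies across requests; the login URL and form field names below are made-up placeholders, not taken from the reference above.
import requests

session = requests.Session()
session.headers.update({'User-Agent': 'Mozilla/5.0'})  # headers set once apply to every request

# hypothetical login endpoint and form fields, for illustration only
session.post('https://example.com/login', data={'user': 'name', 'pwd': 'secret'})
# later requests on the same session automatically carry the cookies from the login response
resp = session.get('https://example.com/profile')
print(resp.status_code)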

BeautifulSoup Module
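
A minimal parsing sketch, assuming the bs4 package is installed; the HTML string is made up for illustration.
from bs4 import BeautifulSoup

html = '<html><head><title>demo</title></head><body><p class="msg">hello</p></body></html>'
soup = BeautifulSoup(html, 'html.parser')

print(soup.title.string)                         # tag access: demo
print(soup.find('p', class_='msg').get_text())   # find by tag name and class: hello
print(soup.select_one('p.msg').text)             # CSS selector: hello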

XPath Tutorials

Reference: https://blog.csdn.net/m0_56863547/article/details/120420011
Reference: https://www.jianshu.com/p/bad2e2c19fd1?utm_campaign=maleskine&utm_content=note&utm_medium=seo_notes&utm_source=recommendation
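
A minimal sketch for trying XPath expressions with lxml before putting them into a spider; the HTML snippet is made up.
from lxml import etree

html = '<div><ul><li class="item">first</li><li class="item">second</li></ul></div>'
tree = etree.HTML(html)

print(tree.xpath('//li/text()'))                    # text of every li: ['first', 'second']
print(tree.xpath('//li[@class="item"][1]/text()'))  # first li with class "item": ['first']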

Scrapy Tutorials

Official tutorial: https://www.osgeo.cn/scrapy/intro/tutorial.html
Reference: https://baijiahao.baidu.com/s?id=1706598239137694865&wfr=spider&for=pc
Reference: https://blog.csdn.net/qq_42071885/article/details/103117375
Reference: http://www.runoob.com/w3cnote/scrapy-detail.html
Reference: https://scrapy-chs.readthedocs.io/zh_CN/1.0/intro/tutorial.html
Latest documentation: https://docs.scrapy.org/en/latest/intro/overview.html

Common Problems

  1. Importing items.py from a spider fails
    Reference: https://blog.csdn.net/qq_43650672/article/details/114870439
    Use a relative import: the spider module lives in the spiders/ subpackage, one level below items.py
    from ..items import PoemItem
  2. The pipeline class in pipelines.py is never called
    Reference: https://blog.csdn.net/chendongpu/article/details/124607594
    Enable it in settings.py by uncommenting (or adding) ITEM_PIPELINES:
ITEM_PIPELINES = {
    'mySpider.pipelines.MyspiderPipeline': 300,  # 300 is the pipeline priority (0-1000, lower runs first)
}
  3. Create a launcher script start.py so the spider can be started from an IDE
from scrapy import cmdline
cmdline.execute("scrapy crawl baidu --nolog".split())
# --nolog suppresses Scrapy's default log output in the console
# run this script from the project root directory (the one containing scrapy.cfg)

Scrapy Spider Workflow

  1. Create the project
    scrapy startproject mySpider
  2. Create a spider inside the project directory
    scrapy genspider baidu "www.baidu.com"
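    These two commands generate a project tree roughly like the one below (standard Scrapy layout; middlewares.py may be missing in very old versions):
mySpider/
    scrapy.cfg                # deployment configuration
    mySpider/
        __init__.py
        items.py              # item field definitions (step 3)
        middlewares.py        # spider / downloader middlewares
        pipelines.py          # item pipelines (step 5)
        settings.py           # project settings, including ITEM_PIPELINES
        spiders/
            __init__.py
            baidu.py          # spider generated by genspider (step 4)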
  3. Define the fields to be scraped in items.py
import scrapy
class MyspiderItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    title = scrapy.Field()
  4. Fill in the spider code in the automatically generated template
import scrapy
from ..items import MyspiderItem

class BaiduSpider(scrapy.Spider):
    name = 'baidu'
    allowed_domains = ['www.baidu.com']
    start_urls = ['http://www.baidu.com/']

    def parse(self, response):
        item = MyspiderItem()
        # .get() extracts the matched text itself instead of a SelectorList
        title = response.xpath('/html/head/title/text()').get()
        item['title'] = title
        print(item)
        yield item
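If nothing is crawled, a common cause in tutorial setups is that the target site's robots.txt forbids the request or the default User-Agent is rejected; a common adjustment in settings.py (the User-Agent value below is just an example):
ROBOTSTXT_OBEY = False      # Scrapy obeys robots.txt by default and may filter the request otherwise
USER_AGENT = 'Mozilla/5.0'  # any realistic browser UA string works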
  5. Enable and edit the pipeline file pipelines.py
# configuration -- this dict goes in settings.py, not in pipelines.py
ITEM_PIPELINES = {
    'mySpider.pipelines.MyspiderPipeline': 300,
}
# output -- the pipeline class below lives in pipelines.py
class MyspiderPipeline:
    def process_item(self, item, spider):
        print(item['title'])
        return item
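As a slightly more useful variant of the pipeline above, a sketch of a pipeline that writes every item to a JSON Lines file; the file name items.jl and the class name JsonLinesPipeline are arbitrary choices, not part of the original notes.
import json

class JsonLinesPipeline:
    def open_spider(self, spider):
        # called once when the spider starts
        self.file = open('items.jl', 'w', encoding='utf-8')

    def close_spider(self, spider):
        # called once when the spider finishes
        self.file.close()

    def process_item(self, item, spider):
        # dict(item) turns the Scrapy Item into a plain dict for serialization
        self.file.write(json.dumps(dict(item), ensure_ascii=False) + '\n')
        return item
To use it, register it in settings.py alongside (or instead of) MyspiderPipeline, e.g. 'mySpider.pipelines.JsonLinesPipeline': 400.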

Using the Scrapy Shell

Reference: https://blog.csdn.net/jiduochou963/article/details/88363230

  1. Start the Scrapy shell
    scrapy shell "www.baidu.com"
  2. Debug XPath and CSS expressions against the built-in response object
    response.xpath('/html/head/title/text()').extract()[0]
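    A few other built-in shell helpers worth noting:
fetch('https://www.baidu.com')                    # download another page into the response object
response.css('title::text').get()                 # CSS-selector equivalent of the XPath above
response.xpath('/html/head/title/text()').get()   # .get() returns None instead of raising IndexError when nothing matches
view(response)                                    # open the downloaded page in the local browser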

posted on 2022-05-12 19:53  朝朝暮Mu