day03
Review
- Resume templates
- ConnectionPool error:
    - Causes:
        - 1. A high-frequency burst of requests was sent to the site in a short time
            - Fix: use a proxy
        - 2. The resources of the (http) connection pool were exhausted
            - Fix: disconnect each request immediately:
                - Connection: close
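A minimal sketch of the Connection: close fix mentioned above, assuming a generic requests call (the URL and User-Agent below are placeholders):

```python
import requests

headers = {
    'User-Agent': 'Mozilla/5.0',  # placeholder UA string
    'Connection': 'close',        # ask the server to drop the connection after the response
}
# each request now uses a fresh connection instead of a pooled keep-alive one,
# so the (http) connection pool cannot be exhausted
requests.get('https://www.example.com', headers=headers)
```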
High-resolution images:
- Image lazy loading: a pseudo attribute is applied on the img tag (the real image URL sits in a placeholder attribute and is only swapped into src once the image scrolls into view)
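A sketch of how a crawler deals with lazy loading, assuming the site stores the real URL in a pseudo attribute hypothetically named `src2` (the attribute name and gallery URL are assumptions, not from the original notes):

```python
import requests
from lxml import etree

headers = {'User-Agent': 'Mozilla/5.0'}  # placeholder UA string
# hypothetical page with lazy-loaded images
page_text = requests.get('https://example.com/gallery', headers=headers).text
tree = etree.HTML(page_text)
for img in tree.xpath('//img'):
    # prefer the pseudo attribute (holds the real URL); fall back to src
    src = img.get('src2') or img.get('src')
    print(src)
```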
Review
- Purpose of data parsing: to implement a focused crawler
- bs4:
    - soup.tagName
    - find/find_all('tagName', attrName='value')
    - select('selector')
        - hierarchy selectors: '>' for a direct child, a space for any descendant
    - string/text
    - tag['href']
- xpath:
    - //tagName
    - //tagName[@attrName="value"]
    - //div[1]
    - //text() or /text()
    - //a/@href
- What is the most obvious difference between bs4 and xpath?
    - Does the parsed result keep the surrounding tags?
        - bs4's tag-locating methods and attributes return the content with its tags attached; see the contrast below
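A minimal sketch of that contrast: bs4's locator returns the tag itself (markup included), while xpath's text() and @attr calls return only bare strings.

```python
from bs4 import BeautifulSoup
from lxml import etree

html = '<div><a href="/home">home</a></div>'

soup = BeautifulSoup(html, 'lxml')
print(soup.find('a'))            # -> <a href="/home">home</a>  (tag kept)

tree = etree.HTML(html)
print(tree.xpath('//a/text()'))  # -> ['home']   (bare text, tag stripped)
print(tree.xpath('//a/@href'))   # -> ['/home']  (bare attribute value)
```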
Today's content:
- Proxies
- Cookies
- Captcha recognition
- Simulated login
Proxies
- Proxy server: forwards requests, which makes it possible to change the IP address a request appears to come from
- How to switch the request IP in requests: pass the proxies parameter (see the code below)
- Proxy anonymity levels:
    - Transparent: the server knows you are using a proxy and knows your real IP
    - Anonymous: the server knows you are using a proxy but does not know your real IP
    - Elite (high anonymity): the server neither knows you are using a proxy nor knows your real IP
- Proxy types:
    - http: this type of proxy can only forward http requests
    - https: can only forward https requests
- Websites offering free proxy IPs:
    - 快代理 (Kuaidaili)
    - 西祠代理 (Xici)
    - goubanjia
    - 代理精灵 (recommended): http://http.zhiliandaili.cn/
- What to do when your IP gets banned while crawling?
    - Use a proxy
    - Build a proxy pool
    - Use a dial-up server
```python
import requests

headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/76.0.3809.100 Safari/537.36'
}
url = 'https://www.baidu.com/s?wd=ip'
# general format: proxies={'http/https': 'ip:port'}
page_text = requests.get(url=url, headers=headers, proxies={'https': '1.197.203.187:9999'}).text
with open('ip.html', 'w', encoding='utf-8') as fp:
    fp.write(page_text)

# build an IP pool from 代理精灵 (Daili Jingling)
from lxml import etree

all_ips = []  # the proxy pool, as a list
proxy_url = 'http://t.11jsq.com/index.php/api/entry?method=proxyServer.generate_api_url&packid=1&fa=0&fetch_key=&groupid=0&qty=52&time=1&pro=&city=&port=1&format=html&ss=5&css=&dt=1&specialTxt=3&specialJson=&usertype=2'
proxy_page_text = requests.get(url=proxy_url, headers=headers).text
tree = etree.HTML(proxy_page_text)
proxy_list = tree.xpath('//body//text()')
for ip in proxy_list:
    dic = {'https': ip}
    all_ips.append(dic)
all_ips
```
[{'https': '113.221.45.28:10799'},
{'https': '117.67.74.168:11934'},
{'https': '220.177.81.239:15295'},
{'https': '59.62.167.86:29681'},
{'https': '116.249.27.17:38047'},
{'https': '119.5.224.137:36669'},
{'https': '36.56.149.135:30577'},
{'https': '116.28.54.195:35647'},
{'https': '124.112.104.185:20005'},
{'https': '119.142.181.118:18903'},
{'https': '124.64.245.234:24296'},
{'https': '119.85.15.189:22316'},
{'https': '182.38.126.21:41307'},
{'https': '125.123.121.135:18661'},
{'https': '1.29.109.86:17045'},
{'https': '123.188.196.22:38263'},
{'https': '182.88.228.9:12437'},
{'https': '27.40.125.28:39145'},
{'https': '171.8.52.214:42741'},
{'https': '175.147.96.125:30043'},
{'https': '121.9.199.251:19618'},
{'https': '163.179.199.82:42868'},
{'https': '183.164.247.233:25052'},
{'https': '117.67.126.150:11941'},
{'https': '120.87.32.208:10419'},
{'https': '114.105.216.212:37164'},
{'https': '106.59.35.66:17848'},
{'https': '114.224.220.53:42365'},
{'https': '114.225.62.70:35164'},
{'https': '122.194.245.106:32646'},
{'https': '183.52.128.105:17778'},
{'https': '116.208.94.195:41071'},
{'https': '124.112.105.0:42093'},
{'https': '122.247.81.103:15954'},
{'https': '116.54.210.142:19412'},
{'https': '112.122.249.110:12267'},
{'https': '221.199.195.197:17189'},
{'https': '114.239.118.178:14250'},
{'https': '125.123.125.179:16491'},
{'https': '114.97.208.34:29338'},
{'https': '218.64.196.114:24309'},
{'https': '112.253.58.102:39905'},
{'https': '112.194.91.97:37087'},
{'https': '122.194.249.66:41237'},
{'https': '171.211.5.167:49596'},
{'https': '117.42.203.4:20535'},
{'https': '180.118.86.3:44082'},
{'https': '113.85.46.70:19579'},
{'https': '106.5.141.193:32232'},
{'https': '122.7.231.4:26880'},
{'https': '218.19.169.196:28423'},
{'https': '112.117.112.182:10556'}]
```python
import random

# crawl the free proxy IPs listed on 西祠代理 (Xici)
url = 'https://www.xicidaili.com/nn/%d'
free_proxies = []
for page in range(1, 30):
    new_url = url % page
    # route each request through a random proxy from the pool built above
    page_text = requests.get(new_url, headers=headers, proxies=random.choice(all_ips)).text
    tree = etree.HTML(page_text)
    tr_list = tree.xpath('//*[@id="ip_list"]//tr')[1:]  # tbody must not appear in an xpath expression
    for tr in tr_list:
        ip = tr.xpath('./td[2]/text()')[0]
        port = tr.xpath('./td[3]/text()')[0]
        t_type = tr.xpath('./td[7]/text()')[0]
        dic = {
            'ip': ip,
            'port': port,
            'type': t_type,
        }
        free_proxies.append(dic)
    print('Page {} crawled!'.format(page))
print(len(free_proxies))
Page 1 crawled! Page 2 crawled! Page 3 crawled! Page 4 crawled! Page 5 crawled! Page 6 crawled! Page 7 crawled! Page 8 crawled! Page 9 crawled! Page 10 crawled! Page 11 crawled! Page 12 crawled! Page 13 crawled! Page 14 crawled! Page 15 crawled! Page 16 crawled! Page 17 crawled! Page 18 crawled! Page 19 crawled! Page 20 crawled! Page 21 crawled! Page 22 crawled! Page 23 crawled! Page 24 crawled! Page 25 crawled! Page 26 crawled! Page 27 crawled! Page 28 crawled! Page 29 crawled! 2900
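A possible follow-up, not in the original notes: the crawled entries are plain ip/port/type dicts, and many free proxies are dead, so a quick liveness check (here against httpbin.org/ip, a public echo service) before using them is worth sketching:

```python
import requests

usable = []
for p in free_proxies:
    # convert a crawled entry into a requests-style proxy dict
    proxy = {p['type'].lower(): '{}:{}'.format(p['ip'], p['port'])}
    try:
        # if the echo request succeeds within 3s, keep the proxy
        requests.get('https://httpbin.org/ip', headers=headers, proxies=proxy, timeout=3)
        usable.append(proxy)
    except requests.exceptions.RequestException:
        pass  # dead or too slow; drop it
print(len(usable))
```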
Cookie
- Purpose: store state for the client
- Goal: crawl the news feed data from Xueqiu: https://xueqiu.com/
- The request must carry a cookie. How do you handle cookie-based anti-crawling?
    - Manual handling
        - Capture the cookie in a packet-capture tool and embed it in headers
        - Applicable when the cookie has no short expiry and does not change dynamically
    - Automatic handling
        - Use the session mechanism
        - Applicable when the cookie changes dynamically
        - session object: used almost exactly like the requests module. If a request made through a session produces a cookie, the cookie is automatically stored in the session and carried on subsequent requests.
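For the manual approach above, a minimal sketch; the Cookie value is a placeholder, to be replaced with the string captured in the browser's dev tools or a packet-capture tool:

```python
import requests

headers = {
    'User-Agent': 'Mozilla/5.0',  # placeholder UA string
    # hypothetical captured value; paste the real Cookie header here
    'Cookie': 'xq_a_token=PLACEHOLDER; u=PLACEHOLDER',
}
page_text = requests.get('https://xueqiu.com', headers=headers).text
```

The cell that follows shows the automatic, session-based approach against Xueqiu.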
```python
# get a session object
session = requests.Session()
main_url = 'https://xueqiu.com'  # guess: requesting this url produces the needed cookie
session.get(main_url, headers=headers)  # the cookie is now stored in the session

url = 'https://xueqiu.com/v4/statuses/public_timeline_by_category.json'
params = {
    'since_id': '-1',
    'max_id': '20346152',
    'count': '15',
    'category': '-1',
}
page_text = session.get(url, headers=headers, params=params).json()
page_text
```
Captcha recognition
- Use an online captcha-solving platform:
    - 打码兔 (Damatu)
    - 云打码 (Yundama)
    - 超级鹰 (Chaojiying): http://www.chaojiying.com/about.html
        - 1. Register and log in (complete identity verification in the user center)
        - 2. After logging in:
            - Create a "software": 软件ID (Software ID) -> generates a software id
            - Download the sample code: 开发文档 (Developer docs) -> python -> download
- Demo of the platform's sample code:
```python
#!/usr/bin/env python
# coding:utf-8

import requests
from hashlib import md5

class Chaojiying_Client(object):

    def __init__(self, username, password, soft_id):
        self.username = username
        password = password.encode('utf8')
        self.password = md5(password).hexdigest()
        self.soft_id = soft_id
        self.base_params = {
            'user': self.username,
            'pass2': self.password,
            'softid': self.soft_id,
        }
        self.headers = {
            'Connection': 'Keep-Alive',
            'User-Agent': 'Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0)',
        }

    def PostPic(self, im, codetype):
        """
        im: image bytes
        codetype: captcha type, see http://www.chaojiying.com/price.html
        """
        params = {
            'codetype': codetype,
        }
        params.update(self.base_params)
        files = {'userfile': ('ccc.jpg', im)}
        r = requests.post('http://upload.chaojiying.net/Upload/Processing.php', data=params, files=files, headers=self.headers)
        return r.json()

    def ReportError(self, im_id):
        """
        im_id: image ID of a misrecognized captcha
        """
        params = {
            'id': im_id,
        }
        params.update(self.base_params)
        r = requests.post('http://upload.chaojiying.net/Upload/ReportError.php', data=params, headers=self.headers)
        return r.json()


chaojiying = Chaojiying_Client('bobo328410948', 'bobo328410948', '899370')  # username, password, software ID (User center >> Software ID)
im = open('a.jpg', 'rb').read()  # path to the local captcha image (on Windows you may need //)
print(chaojiying.PostPic(im, 1004)['pic_str'])  # 1004 = captcha type, see the price list on the official site
```
7261
```python
def getCodeImgText(imgPath, img_type):
    # wrap the platform client: return the recognized text of a captcha image
    chaojiying = Chaojiying_Client('bobo328410948', 'bobo328410948', '899370')  # User center >> Software ID
    im = open(imgPath, 'rb').read()  # path to the local captcha image
    return chaojiying.PostPic(im, img_type)['pic_str']

url = 'https://so.gushiwen.org/user/login.aspx?from=http://so.gushiwen.org/user/collect.aspx'
page_text = requests.get(url, headers=headers).text
tree = etree.HTML(page_text)
img_src = 'https://so.gushiwen.org' + tree.xpath('//*[@id="imgCode"]/@src')[0]
img_code_data = requests.get(img_src, headers=headers).content
with open('./gushiwen.jpg', 'wb') as fp:
    fp.write(img_code_data)
img_text = getCodeImgText('./gushiwen.jpg', 1004)
print(img_text)
```
T71W
Simulated login
- Why does a crawler need simulated login?
    - Some data is only shown after you log in!
- Anti-crawling measures involved:
    - Captcha
    - Dynamic request parameters: the request parameters change on every request
        - Capturing them dynamically: such parameters are usually hidden in the source of the front-end page (see the __VIEWSTATE handling below)
    - Cookies
```python
def getCodeImgText(imgPath, img_type):
    chaojiying = Chaojiying_Client('bobo328410948', 'bobo328410948', '899370')  # User center >> Software ID
    im = open(imgPath, 'rb').read()  # path to the local captcha image
    return chaojiying.PostPic(im, img_type)['pic_str']

# use a session to capture the cookie
s = requests.Session()
first_url = 'https://so.gushiwen.org/user/login.aspx?from=http://so.gushiwen.org/user/collect.aspx'
# fetch the login page through the session so the cookie, the captcha and
# the dynamic parameters all belong to the same login attempt
page_text = s.get(first_url, headers=headers).text
tree = etree.HTML(page_text)
img_src = 'https://so.gushiwen.org' + tree.xpath('//*[@id="imgCode"]/@src')[0]
img_code_data = s.get(img_src, headers=headers).content
with open('./gushiwen.jpg', 'wb') as fp:
    fp.write(img_code_data)
img_text = getCodeImgText('./gushiwen.jpg', 1004)
print(img_text)

# capture the dynamic request parameters from the page source
__VIEWSTATE = tree.xpath('//*[@id="__VIEWSTATE"]/@value')[0]
__VIEWSTATEGENERATOR = tree.xpath('//*[@id="__VIEWSTATEGENERATOR"]/@value')[0]

# URL the login button posts to: captured with a packet-capture tool
login_url = 'https://so.gushiwen.org/user/login.aspx?from=http%3a%2f%2fso.gushiwen.org%2fuser%2fcollect.aspx'
data = {
    '__VIEWSTATE': __VIEWSTATE,
    '__VIEWSTATEGENERATOR': __VIEWSTATEGENERATOR,
    'from': 'http://so.gushiwen.org/user/collect.aspx',
    'email': 'www.zhangbowudi@qq.com',
    'pwd': 'bobo328410948',
    'code': img_text,
    'denglu': '登录',
}
main_page_text = s.post(login_url, headers=headers, data=data).text
with open('main.html', 'w', encoding='utf-8') as fp:
    fp.write(main_page_text)
```
a50d
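Thread-pool-based asynchronous crawling. The cells below first build the list of page URLs, then hand them to multiprocessing.dummy.Pool (a thread pool behind the multiprocessing API) so the requests are issued concurrently instead of one by one: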
```python
url = 'https://www.qiushibaike.com/text/page/%d/'
urls = []
for page in range(1, 11):
    new_url = url % page
    urls.append(new_url)
urls
```

['https://www.qiushibaike.com/text/page/1/',
 'https://www.qiushibaike.com/text/page/2/',
 'https://www.qiushibaike.com/text/page/3/',
 'https://www.qiushibaike.com/text/page/4/',
 'https://www.qiushibaike.com/text/page/5/',
 'https://www.qiushibaike.com/text/page/6/',
 'https://www.qiushibaike.com/text/page/7/',
 'https://www.qiushibaike.com/text/page/8/',
 'https://www.qiushibaike.com/text/page/9/',
 'https://www.qiushibaike.com/text/page/10/']

```python
def get_request(url):  # the callback must take exactly one parameter
    return requests.get(url, headers=headers).text

from multiprocessing.dummy import Pool

pool = Pool(10)  # a thread pool with 10 workers
# apply the custom function get_request asynchronously to every element of urls
response_text_list = pool.map(get_request, urls)
print(response_text_list)
```
