Inflating a website's visit count automatically with Python

import requests
import time
import random

url = ['http://cq.srx123.com/',
       'http://cq.srx123.com/article.php',
       'http://cq.srx123.com/yszc.php?act=k',
       'http://cq.srx123.com/download.php']

head = ['Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.103 Safari/537.36',
        'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:75.0) Gecko/20100101 Firefox/75.0']

# DaiLi = ['58.218.92.68:2360', '58.218.92.72:6156', '58.218.92.78:5424',
#          '58.218.92.74:4716', '58.218.92.74:9387', '58.218.92.78:2863',
#          '58.218.92.68:8890', '58.218.92.77:2867', '58.218.92.77:8749',
#          '58.218.92.73:7463', '58.218.92.78:3749', '58.218.92.68:9321',
#          '58.218.92.75:4647', '58.218.92.73:6601', '58.218.92.74:4077',
#          '58.218.92.69:4815', '58.218.92.68:3761', '58.218.92.78:3447']

ShuLiang = 1
for i in range(len(url)):
    for Tou in range(len(head)):
        headers = {"User-Agent": head[Tou]}  # build the request headers

        # for Dai in range(len(DaiLi)):
        #     proxies = {"http": "http://" + DaiLi[Dai]}  # build the proxy-IP format
        try:
            # when using an IP proxy, also pass proxies=proxies to get()
            response = requests.get(url[i], headers=headers, timeout=10)
        except requests.RequestException:
            print("request failed")
            continue
        if response.status_code == 200:
            print('visit ' + str(ShuLiang) + ' succeeded')
            ShuLiang += 1
            DengDai = random.randint(0, 99)  # wait a random number of seconds
            print(DengDai)
            time.sleep(DengDai)
        else:
            print("visit failed")
            ShuLiang += 1
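For anyone who does want to revive the commented-out proxy pool, here is a minimal sketch of how it could be wired into `requests`. The `build_proxies` and `fetch_via_proxy` helpers are my own illustrative names, not part of the original script, and the sketch assumes the proxies speak plain HTTP:

```python
import random
import requests

# Two sample host:port strings from the post's commented-out pool.
DaiLi = ['58.218.92.68:2360', '58.218.92.72:6156']

def build_proxies(proxy: str) -> dict:
    """Map one 'host:port' string to the dict shape requests expects."""
    return {"http": "http://" + proxy, "https": "http://" + proxy}

def fetch_via_proxy(url: str, headers: dict, timeout: float = 10.0):
    """One GET through a randomly chosen proxy; returns a Response or None."""
    proxies = build_proxies(random.choice(DaiLi))
    try:
        return requests.get(url, headers=headers, proxies=proxies, timeout=timeout)
    except requests.RequestException:
        return None
```

Rotating the proxy per request (rather than per outer loop) spreads the traffic across more source IPs, which is the whole point of keeping a pool.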

Recently, for one of our major courses, the teacher assigned us a website-operations project, graded on the site's visit count. So I wrote this Python program (partial code shown).

The program works, but it still has some problems: if it visits too quickly, the server may treat it as a DDoS attack and block it, or the analytics backend may discard hits that arrive too fast.
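One common way to avoid tripping rate limits like the ones described above is randomized exponential backoff: wait longer after each consecutive failure, with jitter so requests do not fall into a detectable pattern. A sketch (the `backoff_delay` helper is a hypothetical name, not from the original post):

```python
import random

def backoff_delay(attempt: int, base: float = 1.0, cap: float = 60.0) -> float:
    """'Full jitter' backoff: a random delay in [0, min(cap, base * 2**attempt)].

    attempt is the number of consecutive failures so far; the cap keeps the
    delay from growing without bound.
    """
    return random.uniform(0.0, min(cap, base * 2.0 ** attempt))

# Usage sketch: after the n-th consecutive failure, time.sleep(backoff_delay(n)),
# and reset n to 0 once a request succeeds.
```

Compared with the fixed `random.randint(0, 99)` sleep in the script, this backs off only when the server is actually pushing back, so successful runs stay fast.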

My advice: before using something like this, check whether the target site allows crawler access at all, and mind the ethics of web scraping. You can read the site's stated crawler restrictions and limit your requests to what it permits.
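Checking a site's crawler restrictions can be automated with the standard library's `urllib.robotparser`. A sketch that parses a robots.txt inline to show the API; the rules and the "MyBot" agent name here are made up for illustration (normally you would call `set_url()` and `read()` to fetch the site's real robots.txt):

```python
from urllib.robotparser import RobotFileParser

# Parse hypothetical robots.txt rules directly, without a network fetch.
rp = RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /private/",
])

# can_fetch() answers: may this user agent request this URL?
print(rp.can_fetch("MyBot", "http://example.com/private/page"))  # False
print(rp.can_fetch("MyBot", "http://example.com/index.html"))    # True
```

Calling `can_fetch()` before each `requests.get()` is a cheap way to keep a script like the one above inside the site's stated rules.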

posted @ 2020-05-05 19:00  Jack船长1