Python Crawler: Setting Up a Proxy Server

Crawling websites through a proxy server.

Free proxy sites: www.xicidaili.com, www.xsdaili.com, www.mayidaili.com/free, http://ip.yqie.com/ipproxy.htm

Add the proxy server's address, and use the dictionary key that matches the proxy's type: an HTTPS proxy must be registered under "https", for example. A matching type keeps the connection fast.

var1 = urllib.request.ProxyHandler({"protocol": "proxy-server:port"})

(stores the proxy server's protocol, address, and port in var1)

var2 = urllib.request.build_opener(var1, urllib.request.HTTPHandler)

(bundles var1's settings together with HTTPHandler into an opener, stored in var2)

urllib.request.install_opener(var2)

(installs var2 globally, so that plain urlopen() calls also use the opener's settings)
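A minimal sketch of the three steps combined. The proxy address is a placeholder of the kind the free lists above hand out and has almost certainly expired; substitute a live one:

import urllib.request

# Step 1: describe the proxy; the key must match the proxy's type.
proxy = urllib.request.ProxyHandler({"http": "125.123.143.35:9000"})  # placeholder address
# Step 2: build an opener around the proxy handler.
opener = urllib.request.build_opener(proxy, urllib.request.HTTPHandler)
# Step 3: install it globally so urlopen() routes through the proxy.
urllib.request.install_opener(opener)

html = urllib.request.urlopen("http://www.example.com").read().decode("utf-8", "ignore")
print(len(html))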

Putting the above together: use a proxy server, a spoofed browser User-Agent, and a regular expression to crawl the forum pages linked from the Gfan (bbs.gfan.com) homepage.

import re
import urllib.request
import urllib.error

def use_proxy(url):
    # Route traffic through an HTTPS proxy (a free proxy; replace with a live one).
    proxy = urllib.request.ProxyHandler({"https": "125.123.143.35:9000"})
    # One opener must carry both the proxy and the browser disguise,
    # since install_opener() keeps only the last opener installed.
    opener = urllib.request.build_opener(proxy, urllib.request.HTTPHandler)
    opener.addheaders = [("User-Agent", "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.100 Safari/537.36")]
    # Install globally so urlopen() and urlretrieve() both use this opener.
    urllib.request.install_opener(opener)
    data = urllib.request.urlopen(url).read().decode("utf-8", "ignore")
    return data

url = "http://bbs.gfan.com/"
data = use_proxy(url)
# Capture the forum IDs from links like href="http://bbs.gfan.com/forum-123-1.html"
pattern = r'href="http://bbs.gfan.com/forum-(.*?)-1\.html"'
forums = re.compile(pattern).findall(data)
try:
    for i in range(len(forums)):
        file = "F:/bing/a/" + str(i) + ".html"
        url = "http://bbs.gfan.com/forum-" + str(forums[i]) + "-1.html"
        print("Downloading page %s" % i)
        urllib.request.urlretrieve(url, file)
        print("Page %s downloaded successfully" % i)
except urllib.error.URLError as e:
    if hasattr(e, "code"):
        print(e.code)
    if hasattr(e, "reason"):
        print(e.reason)
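As a quick sanity check that requests really leave through the proxy (assuming the placeholder proxy above is still alive), fetch a service that echoes the caller's IP. Calling opener.open() directly also avoids touching the globally installed opener:

import urllib.request

proxy = urllib.request.ProxyHandler({"https": "125.123.143.35:9000"})  # placeholder address
opener = urllib.request.build_opener(proxy)
# httpbin.org/ip answers with JSON like {"origin": "<your apparent IP>"},
# which should show the proxy's address rather than your own.
print(opener.open("https://httpbin.org/ip", timeout=10).read().decode("utf-8"))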