Scraping China's University Rankings Website with Python

Target URL:

http://www.zuihaodaxue.com/zuihaodaxuepaiming2018.html

Opening this URL now redirects automatically to the site's home page, so it is replaced with:

https://www.shanghairanking.cn/rankings/bcur/201611
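
A quick way to confirm the redirect (a minimal sketch, assuming the old zuihaodaxue.com domain still responds at all) is to request the old URL without following redirects and inspect the response:

import requests

# Ask for the old page but refuse to follow any redirect the server sends
r = requests.get('http://www.zuihaodaxue.com/zuihaodaxuepaiming2018.html',
                 allow_redirects=False, timeout=30)
print(r.status_code)              # 301/302 means a redirect was issued
print(r.headers.get('Location'))  # the page the server redirects to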


Requirements:

1. Crawl the university rankings (student IDs ending in 3 or 4 crawl the 2016 ranking)
2. Save the crawled data as a CSV file

Code:

import csv
import os
import requests
from bs4 import BeautifulSoup
AllUniv = []  # one parsed row (list of cell strings) per university

def getHTMLText(url):
    """Fetch the page and return its text, or "" if the request fails."""
    try:
        r = requests.get(url, timeout=30)
        r.raise_for_status()
        r.encoding = 'utf-8'
        return r.text
    except requests.RequestException:  # avoid a bare except; catch request errors only
        return ""

def fillUnivList(soup):
    """Append the cell texts of every data row in the ranking table to AllUniv."""
    for tr in soup.find_all('tr'):
        ltd = tr.find_all('td')
        if len(ltd) == 0:  # header rows contain <th> only; skip them
            continue
        # td.string is None when a cell holds nested tags; get_text() is robust
        singleUniv = [td.get_text(strip=True) for td in ltd]
        AllUniv.append(singleUniv)

def writercsv(save_road, num, title):
    """Write the first num rows of AllUniv to a CSV file; the header row
    is written only when the file is created for the first time."""
    new_file = not os.path.isfile(save_road)
    # 'a' creates the file if it is missing; utf-8-sig keeps Excel happy with Chinese text
    with open(save_road, 'a', newline='', encoding='utf-8-sig') as f:
        csv_write = csv.writer(f, dialect='excel')
        if new_file:
            csv_write.writerow(title)
        for row in AllUniv[:num]:  # slicing avoids an IndexError if fewer rows were scraped
            csv_write.writerow(row)

title=["排名","学校名称","省市","类型","总分","生源质量","培养结果","科研规模(篇)",\
       "科研质量(FWCI)","顶尖成果(篇)","顶尖人才(人)","科技服务(千元)",\
       "产学研合作(篇)","成果转化(千元)"]
save_road="E:\\排名.csv"
url = 'https://www.shanghairanking.cn/rankings/bcur/202011'
html = getHTMLText(url)
soup = BeautifulSoup(html, "html.parser")
fillUnivList(soup)
writercsv(save_road, 30, title)
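
To check the output, the CSV can be read back with the standard library (assuming the run succeeded and E:\排名.csv exists):

import csv

with open("E:\\排名.csv", newline='', encoding='utf-8-sig') as f:
    for row in csv.reader(f):
        print(row)  # first the header row, then one list of strings per university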

Result:

It fails to crawl: the new shanghairanking.cn page builds its ranking table with JavaScript after the initial HTML arrives, so the response that requests sees contains no populated <td> cells and AllUniv stays empty.
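
One way around this is to let a real browser execute the JavaScript and then parse the rendered HTML. Below is a minimal sketch using Selenium (an extra dependency, not part of the original script; it assumes Chrome plus a matching chromedriver are installed, and it reuses fillUnivList, writercsv, save_road, and title from above):

from selenium import webdriver
from bs4 import BeautifulSoup
import time

driver = webdriver.Chrome()
driver.get('https://www.shanghairanking.cn/rankings/bcur/201611')
time.sleep(3)  # crude wait for the JavaScript table to render; WebDriverWait would be more robust
soup = BeautifulSoup(driver.page_source, "html.parser")
driver.quit()

fillUnivList(soup)  # same parsing logic as before, now on rendered HTML
writercsv(save_road, 30, title)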

posted @ 2020-12-14 14:57 英魂