A Complete Course Project

Project description: take the Guangzhou College of Commerce news site as the research subject, crawl the titles, publish times, and links of the news items on its pages, present the data analysis as a word cloud, and finally look at how the data can be stored.

1. Choose a topic of interest:

Take the Guangzhou College of Commerce news site as the research subject and crawl the titles, publish times, and link data of the news items on its pages.

Crawl the relevant data from the web:

import requests
from bs4 import BeautifulSoup

res = requests.get('http://news.gzcc.cn/html/xiaoyuanxinwen/')
res.encoding = 'utf-8'
soup = BeautifulSoup(res.text, 'html.parser')

# Each news item is an <li> that contains a .news-list-title element
for news in soup.select('li'):
    if len(news.select('.news-list-title')) > 0:
        title = news.select('.news-list-title')[0].text
        url = news.select('a')[0]['href']
        # .news-list-info holds the publish date and the source as its first two children
        time = news.select('.news-list-info')[0].contents[0].text
        source = news.select('.news-list-info')[0].contents[1].text
        print(title, url, time, source)

The crawl results are as follows:

2. Perform text analysis and generate a word cloud:

import requests
import jieba
from bs4 import BeautifulSoup

url = 'http://news.gzcc.cn/html/xiaoyuanxinwen/'
res = requests.get(url)
res.encoding = 'utf-8'
soup = BeautifulSoup(res.text, 'html.parser')

# Fetch the detail page of the first news item and use its body text for analysis
p = ''
for news in soup.select('li'):
    if len(news.select('.news-list-title')) > 0:
        url = news.select('a')[0]['href']
        title = news.select('.news-list-title')[0].text
        resd = requests.get(url)
        resd.encoding = 'utf-8'
        soupd = BeautifulSoup(resd.text, 'html.parser')
        # the detail page carries the article body in .show-content
        p = soupd.select('.show-content')[0].text
        break

# Segment the text with jieba and count words of two or more characters
words = jieba.lcut(p)
counts = {}
for word in words:
    if len(word) == 1:
        continue
    counts[word] = counts.get(word, 0) + 1

# Print the ten most frequent words
items = list(counts.items())
items.sort(key=lambda x: x[1], reverse=True)
for i in range(10):
    word, count = items[i]
    print("{:<5}{:>2}".format(word, count))

from wordcloud import WordCloud
import matplotlib.pyplot as plt

# Draw the word cloud (msyh.ttc is the Microsoft YaHei font, needed for Chinese glyphs)
cy = WordCloud(font_path='msyh.ttc').generate(p)
plt.imshow(cy, interpolation='bilinear')
plt.axis("off")
plt.show()

  

The generated word cloud is shown below:

Explanation of the text-analysis results:

Presenting the crawled data as a word cloud gives the reader a clear view of the key information at a glance.
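As a refinement, since the word frequencies are already computed with jieba in the script above, the cloud can be generated directly from those counts instead of letting WordCloud re-tokenize the raw text (its built-in tokenizer targets English). A minimal sketch, assuming the counts dict built above:

from wordcloud import WordCloud
import matplotlib.pyplot as plt

# counts is the {word: frequency} dict built with jieba in the script above
cy = WordCloud(font_path='msyh.ttc').generate_from_frequencies(counts)
plt.imshow(cy, interpolation='bilinear')
plt.axis("off")
plt.show()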

 

3. Structured data analysis:

Convert the crawled data into pandas' DataFrame structure and save it from the DataFrame to Excel; the result is shown below:
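At its core this step just builds a DataFrame from a list of dicts and calls to_excel; a minimal sketch with hypothetical rows (the full crawl-and-save script follows):

import pandas

# hypothetical rows, shaped like the dicts the crawler below produces
rows = [{'title': 'news A', 'click': 10}, {'title': 'news B', 'click': 5}]
df = pandas.DataFrame(rows)
df.to_excel('demo.xlsx')  # requires an Excel writer such as openpyxl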

import requests
import re
import pandas
from bs4 import BeautifulSoup
import sqlite3

def getclick(newurl):
    # Pull the article id out of a URL such as .../xiaoyuanxinwen_1027/8443.html
    id = re.search('_(.*).html', newurl).group(1).split('/')[1]
    clickurl = 'http://oa.gzcc.cn/api.php?op=count&id={}&modelid=80'.format(id)
    # The counter API returns a JavaScript snippet; strip the wrapper to get the number
    click = int(requests.get(clickurl).text.split(".")[-1].lstrip("html('").rstrip("');"))
    return click

def getdetail(newsurl):
    res = requests.get(newsurl)
    res.encoding = 'utf-8'
    soup = BeautifulSoup(res.text, 'html.parser')
    news = {}
    news['url'] = newsurl
    news['title'] = soup.select('.show-title')[0].text
    info = soup.select('.show-info')[0].text
    #news['dt'] = datetime.strptime(info.lstrip('发布时间')[0:19], '%Y-%m-%d %H:%M:')
    #news['source'] = re.search('来源:(.*)点击', info).group(1).strip()
    news['content'] = soup.select('.show-content')[0].text.strip()
    news['click'] = getclick(newsurl)
    return news

def onepage(pageurl):
    res = requests.get(pageurl)
    res.encoding = 'utf-8'
    soup = BeautifulSoup(res.text, 'html.parser')
    newsls = []
    for news in soup.select('li'):
        if len(news.select('.news-list-title')) > 0:
            newsls.append(getdetail(news.select('a')[0]['href']))
    return newsls

newstotal = []
for i in range(2, 3):
    # list pages after the first follow the pattern <base>/<n>.html
    listurl = 'http://news.gzcc.cn/html/xiaoyuanxinwen/{}.html'.format(i)
    newstotal.extend(onepage(listurl))

df = pandas.DataFrame(newstotal)
df.to_excel('gzccnews.xlsx')

with sqlite3.connect('gzccnews_db.sqlite') as db:
    df.to_sql('news_table', con=db)
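One caveat: to_sql fails with an error if news_table already exists, so the script above cannot be re-run as-is. A small sketch of one way around this, using pandas' standard if_exists argument (not part of the original script):

import sqlite3

with sqlite3.connect('gzccnews_db.sqlite') as db:
    # replace the table on each run instead of failing when it already exists
    df.to_sql('news_table', con=db, if_exists='replace')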

 

 

Save the crawled data from the DataFrame into a sqlite3 database, then read it back and display it:

import sqlite3
import pandas

with sqlite3.connect('gzccnews_db.sqlite') as db:
    df8 = pandas.read_sql_query('SELECT * FROM news_table', con=db)
print(df8)

The results are as follows:
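Beyond reading the whole table back, a filtered query is a quick way to check the stored data; for example, listing the five most-clicked articles (an illustrative query, not part of the original post):

import sqlite3
import pandas

with sqlite3.connect('gzccnews_db.sqlite') as db:
    # column names follow the dict keys used when building the DataFrame
    top = pandas.read_sql_query(
        'SELECT title, click FROM news_table ORDER BY click DESC LIMIT 5',
        con=db)
print(top)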

 
