Structuring and Saving the Data

1. Save the body text of a news article to a text file.

newscontent = soup.select('.show-content')[0].text
# Write the article body, then read it back to check.  The file must be
# closed (or flushed) before reopening, otherwise the read may see nothing.
with open('new.txt', 'w', encoding='utf-8') as f:
    f.write(newscontent)
with open('new.txt', 'r', encoding='utf-8') as f:
    print(f.read())

2. Structure the news data as a list of dictionaries:

Details of one news item --> dictionary news
All items on one listing page --> list: newsls.append(news)
All news across all listing pages --> list: newstotal.extend(newsls)
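The dictionary/list structuring above can be sketched offline with dummy data (the field names mirror the dictionary built in the scraper below; the sample values are invented for illustration):

```python
# Sketch of the news -> newsls -> newstotal pattern with made-up values.
newstotal = []
for page in range(2):            # pretend there are two listing pages
    newsls = []
    for item in range(3):        # three news items per listing page
        news = {                 # one news item -> one dictionary
            'title': 'title-{}-{}'.format(page, item),
            'clickcount': 100 * page + item,
            'source': '学校综合办',
        }
        newsls.append(news)      # collect one page's items
    newstotal.extend(newsls)     # merge each page into the overall list
print(len(newstotal))            # 6 dictionaries in total
```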

import requests
from bs4 import BeautifulSoup
import re
import pandas

firstpage = 'http://news.gzcc.cn/html/xiaoyuanxinwen/'
url = 'http://news.gzcc.cn/html/2018/xiaoyuanxinwen_0411/9205.html'
res = requests.get(firstpage)
res.encoding = 'utf-8'
soup1 = BeautifulSoup(res.text, 'html.parser')
# Total number of news items, e.g. '300条' -> 300.
newscount = int(soup1.select('.a1')[0].text.rstrip('条'))
# Total number of listing pages (10 items per page); the loop below
# only scrapes pages 2-5 as a sample.
newcount1 = newscount // 10 + 1
allnews = []

def getallnews(url, allnews):
    res = requests.get(url)
    res.encoding = 'utf-8'
    soup1 = BeautifulSoup(res.text, 'html.parser')
    news = soup1.select('.news-list')[0].select('li')
    for i in news:
        news1 = i.a.attrs['href']
        title = gettitle(news1)
        datetime = getdatetime(news1)
        source = getsource(news1)
        # Pull the numeric article id out of the detail-page URL.
        newsid = re.search(r'(\d{2,})\.html', news1).group(1)
        newid = 'http://oa.gzcc.cn/api.php?op=count&id=' + newsid + '&modelid=80'
        clickcount = getclickcount(newid)
        dictionary = {}
        dictionary['clickcount'] = clickcount
        dictionary['title'] = title
        dictionary['datetime'] = datetime
        dictionary['source'] = source
        allnews.append(dictionary)
    return allnews

def getclickcount(newurl):
    res = requests.get(newurl)
    res.encoding = 'utf-8'
    text = BeautifulSoup(res.text, 'html.parser').text
    # The count API returns something like "...html('5423');" --
    # keep the part after the last '.html' and strip the punctuation.
    click = text.split('.html')
    return int(click[-1].lstrip("('").rstrip("');"))

def gettitle(newsurl):
    res = requests.get(newsurl)
    res.encoding = 'utf-8'
    soup = BeautifulSoup(res.text, 'html.parser')
    return soup.select('.show-title')[0].text

def getdatetime(newurl):
    res = requests.get(newurl)
    res.encoding = 'utf-8'
    soup = BeautifulSoup(res.text, 'html.parser')
    t3 = soup.select('.show-info')[0].text
    t4 = t3.split()
    t5 = t4[0].lstrip('发布时间:')  # date part
    return t5 + ' ' + t4[1]         # date + time

def getsource(newurl):
    res = requests.get(newurl)
    res.encoding = 'utf-8'
    soup = BeautifulSoup(res.text, 'html.parser')
    t3 = soup.select('.show-info')[0].text
    t4 = t3.split()
    return t4[4].lstrip('来源:')

for i in range(2, 6):  # listing pages 2-5; page 1 is the index page
    pageurl = 'http://news.gzcc.cn/html/xiaoyuanxinwen/{}.html'.format(i)
    hao = getallnews(pageurl, allnews)
df = pandas.DataFrame(hao)
print(df)
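The click-count endpoint returns a short JavaScript snippet rather than an HTML page, so the string surgery in getclickcount can be checked offline. The sample string below is an assumption about the response format, not a captured response:

```python
import re

# Assumed shape of the oa.gzcc.cn count API response (a jQuery call).
sample = "$('#hits').html('5423');"

# The split/strip chain used in getclickcount above:
clicks = int(sample.split('.html')[-1].lstrip("('").rstrip("');"))
print(clicks)   # 5423

# An equivalent, more direct regex extraction:
clicks2 = int(re.search(r"html\('(\d+)'\)", sample).group(1))
print(clicks2)  # 5423
```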


3. Install pandas and use pandas.DataFrame(newstotal) to create a DataFrame object df.

df = pandas.DataFrame(hao)
print(df)

4. Use df to save the extracted data to a CSV or Excel file.

df.to_excel('text.xlsx')
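to_excel needs an Excel writer package such as openpyxl to be installed; writing CSV instead has no extra dependency. A minimal sketch, using a made-up row in place of the scraped data (the file name text.csv is just an example):

```python
import pandas

# Hypothetical row standing in for the scraped news data.
df = pandas.DataFrame([
    {'clickcount': 3500, 'title': 'example title', 'source': '学校综合办'},
])
# utf-8-sig writes a BOM so Excel decodes the Chinese text correctly.
df.to_csv('text.csv', index=False, encoding='utf-8-sig')
```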

5. Use the functions and methods pandas provides for data analysis:

Extract the first 6 rows showing click count, title, and source.

print(df.head(6))
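df.head(6) keeps every column; to show only the three columns named above, select them first. A sketch with a hypothetical frame whose column names follow the dictionary keys used in getallnews:

```python
import pandas

# Hypothetical frame with the same columns as the scraped data.
df = pandas.DataFrame({
    'clickcount': list(range(10)),
    'title': ['title-{}'.format(i) for i in range(10)],
    'source': ['学校综合办'] * 10,
    'datetime': ['2018-04-11 09:00'] * 10,
})
subset = df[['clickcount', 'title', 'source']].head(6)
print(subset.shape)  # (6, 3)
```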
Extract the news published by '学校综合办' whose click count exceeds 3000.

wa = df[(df['clickcount'] > 3000) & (df['source'] == '学校综合办')]
print(wa)

Extract the news published by '国际学院' and '学生工作处'.

sourcelist = ['国际学院', '学生工作处']
specialnews = df[df['source'].isin(sourcelist)]
print(specialnews)
specialnews.to_excel('hello.xlsx')

posted @ 2018-04-12 11:34  161蔡瑞奇