Web Scraping Comprehensive Assignment

The previously saved data can be read back with pandas:

import pandas as pd

newsdf = pd.read_csv(r'F:\duym\gzccnews.csv')

1. Save the scraped content to a sqlite3 database

import sqlite3

# Write the DataFrame into a sqlite3 database file
with sqlite3.connect('gzccnewsdb.sqlite') as db:
    newsdf.to_sql('gzccnews', con=db)

# Read it back to verify
with sqlite3.connect('gzccnewsdb.sqlite') as db:
    df2 = pd.read_sql_query('SELECT * FROM gzccnews', con=db)
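Note that to_sql fails by default if the target table already exists, so re-running the script raises an error. A minimal sketch of handling this with the if_exists parameter (not shown in the original code):

import sqlite3
import pandas as pd

# Assumes newsdf is the DataFrame read from gzccnews.csv above
with sqlite3.connect('gzccnewsdb.sqlite') as db:
    # 'replace' drops and recreates the table; use 'append' to add rows instead
    newsdf.to_sql('gzccnews', con=db, if_exists='replace', index=False)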


Save to a MySQL database

import pandas as pd
import pymysql
from sqlalchemy import create_engine

# Connection string: replace user, passwd, host and port with your own settings
conInfo = "mysql+pymysql://user:passwd@host:port/gzccnews?charset=utf8"
engine = create_engine(conInfo, encoding='utf-8')

# allnews: the list of news records scraped earlier
df = pd.DataFrame(allnews)
df.to_sql(name='news', con=engine, if_exists='append', index=False)
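To confirm that the rows actually landed in MySQL, the table can be read back with pandas (a small verification sketch, assuming the same engine as above):

# Read the news table back from MySQL to verify the write
check_df = pd.read_sql('SELECT * FROM news', con=engine)
print(check_df.head())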

2. Web scraping comprehensive assignment

  1. Choose a hot topic or a topic you are interested in.
  2. Decide what to crawl and define the scope of the crawl.
  3. Understand the restrictions and constraints of the crawl target.
  4. Crawl the corresponding content.
  5. Perform data analysis and text analysis.
  6. Write an article that includes an explanation, the technical points, the data, a graphical presentation and discussion of the data analysis, and a graphical presentation and discussion of the text analysis.
  7. Publish the article publicly.

     

Crawl topic: analysis of movie reviews for Dying to Survive (《我不是药神》)

Crawl target: Douban movie comments (https://movie.douban.com/subject/26752088/comments)

 

Restrictions and constraints of the crawl target:

The following measures help avoid getting the IP banned:

Set a reasonable User-Agent so that requests look like they come from a real browser.
To see your own browser's user agent, open the browser and enter about:version, then check the "User Agent" field.
Collect the user-agent strings of several common browsers into a list.
Then import random and pick one user-agent at random for each request.
Define the request header dict headers = {'User-Agent': ...}.
When sending requests.get, pass the headers containing the custom User-Agent (see the sketch below).
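A minimal sketch of this user-agent rotation (the user-agent strings below are illustrative placeholders and fetch is a hypothetical helper, not part of the original code):

import random
import requests

# A small pool of common browser user-agent strings (placeholders; substitute real ones)
USER_AGENTS = [
    'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/62.0.3202.94 Safari/537.36',
    'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_6) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/12.0 Safari/605.1.15',
    'Mozilla/5.0 (Windows NT 10.0; WOW64; rv:60.0) Gecko/20100101 Firefox/60.0',
]

def fetch(url):
    # Pick a random user-agent for each request so the traffic looks less uniform
    headers = {'User-Agent': random.choice(USER_AGENTS)}
    resp = requests.get(url, headers=headers)
    resp.raise_for_status()
    return resp.text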

Crawled content:
For each movie review, the username, timestamp, comment text, and helpful-vote count are collected.

 

 

The code is as follows:

import urllib.request
from bs4 import BeautifulSoup

def getHtml(url):
    """Fetch the page at url."""
    headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/62.0.3202.94 Safari/537.36'}
    req = urllib.request.Request(url, headers=headers)
    resp = urllib.request.urlopen(req)
    content = resp.read().decode('utf-8')
    return content

def getComment(url):
    """Parse one page of comments and return them as a list of dicts."""
    html = getHtml(url)
    soupComment = BeautifulSoup(html, 'html.parser')
    onePageComments = []
    for comment in soupComment.select('.comment-item'):
        card = {}
        card['用户名'] = comment.select('.comment-info')[0].select('a')[0].text  # username
        card['时间'] = comment.select('.comment-time')[0].text.strip()           # time
        card['评论'] = comment.select('.short')[0].text                          # comment text
        card['有用数'] = comment.select('.votes')[0].text                        # helpful votes
        onePageComments.append(card)
    return onePageComments

comment = []
if __name__ == '__main__':
    with open('我不是药神page10.txt', 'w', encoding='utf-8') as f:
        for page in range(10):  # Douban requires login/verification to crawl more pages of comments.
            url = 'https://movie.douban.com/subject/26752088/comments?start=' + str(20*page) + '&limit=20&sort=new_score&status=P'
            print('Comments on page %s:' % (page+1))
            print(url + '\n')
            onePage = getComment(url)
            comment.extend(onePage)
            print(comment)
            # Write only this page's comments so nothing is duplicated in the file
            for card in onePage:
                f.write(card['评论'])
            print('\n')
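As an extra precaution that is not in the original code, a short random pause between page requests makes the crawler less aggressive. A minimal sketch, intended to go inside the page loop right after each getComment(url) call:

import time
import random

# Pause 1-3 seconds between pages (the exact range is an arbitrary choice)
time.sleep(random.uniform(1, 3))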
import matplotlib.pyplot as plt
from wordcloud import WordCloud
from imageio import imread   # scipy.misc.imread has been removed from SciPy; imageio provides imread
import jieba

text = open("我不是药神page10.txt", encoding="utf-8").read()

# Segment the text with jieba
wordlist = jieba.cut(text, cut_all=True)
wl = " ".join(wordlist)
# print(wl)  # print the segmented text
# Optionally write the segmented text to a file:
# fenciTxt = open("fenciHou.txt", "w+")
# fenciTxt.writelines(wl)
# fenciTxt.close()

# Configure the word cloud
wc = WordCloud(background_color="white",                     # background colour
               mask=imread('shen.jpg'),                      # background image used as the mask
               max_words=2000,                               # maximum number of words shown
               stopwords=["的", "这种", "这样", "还是", "就是", "这个"],  # stop words
               font_path=r"C:\Windows\Fonts\simkai.ttf",     # KaiTi font; the default DroidSansMono.ttf cannot render Chinese
               max_font_size=60,                             # maximum font size
               random_state=30,                              # number of random colour schemes
               )
myword = wc.generate(wl)   # generate the word cloud
wc.to_file('result.jpg')

# Display the word cloud
plt.imshow(myword)
plt.axis("off")
plt.show()

import pandas as pd
import pymysql
from sqlalchemy import create_engine

conInfo = "mysql+pymysql://root:@localhost:3306/yaoshen?charset=utf8"
engine = create_engine(conInfo, encoding='utf-8')

# Save the scraped comments into the pinglun table of the yaoshen database
df = pd.DataFrame(comment)
print(df)
df.to_sql(name='pinglun', con=engine, if_exists='append', index=False)

# Direct pymysql connection (not used further here)
conn = pymysql.connect(host='localhost', port=3306, user='root', passwd='', db='yaoshen', charset='utf8')

  

The resulting word cloud is shown below:

Summary:

Dying to Survive (《我不是药神》) tells the story of Cheng Yong, an ordinary shopkeeper selling Indian health oil who, through a series of coincidences, becomes the agent for the Indian generic drug "格列宁". Through the entanglement between Cheng Yong and "格列宁", the film reflects the situation of chronic myeloid leukaemia patients at the time, for whom treatment was expensive and the genuine drug carried a sky-high price. The reviews collected here are extremely positive, which is consistent with the film's 9.0 Douban rating; it is thought-provoking and well worth watching.

 
