Using regular expressions: extracting the click count and refactoring into functions

1. Use a regular expression to check whether an email address is well-formed.

import re

# Simple email pattern: local part, optional dotted segments, '@', domain, then 1-3 dotted suffixes of 2-3 word characters.
r = r'^\w+(\.\w+)*@\w+(\.\w{2,3}){1,3}$'
e = '123456789@qq.com'
m = re.match(r, e)
if m:
    print(m.group(0))
else:
    print('error!')
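
A quick negative check (the malformed address below is a made-up example, not from the original post):

print('ok' if re.match(r, 'user@@qq.com') else 'error!')   # double '@' cannot match the pattern, so this prints error!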

2. Use a regular expression to pick out all the phone numbers.

s = '罗德广的号码020-123456, 艺术大师0759-3877878,你的号码0658-3877456,地址号012648945614685789'
tel = re.findall(r'(\d{3,4})-(\d{6,8})', s)   # 3-4 digit area code, a dash, then a 6-8 digit number
print(tel)
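
Because the pattern has two groups, findall returns (area code, number) tuples, and the long undashed digit string at the end is skipped:

[('020', '123456'), ('0759', '3877878'), ('0658', '3877456')]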

3. Use a regular expression to split English text into words with re.split().

news = '''They say a person needs just three things to be truly happy in this world: someone to love, something to do, and something to hope for.'''
words = [w for w in re.split(r'[\s,.:;!?"]+', news) if w]   # split on runs of whitespace/punctuation, drop empty strings
print(words)
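
The character class covers whitespace plus the punctuation that actually occurs in this sentence; text containing other marks (apostrophes, dashes, parentheses) would need those added to the class as well.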

4. Use a regular expression to extract the news ID.
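
A minimal sketch, using the sample article URL that step 7 below also uses; the regex captures everything between the underscore and '.html', and the split keeps only the trailing ID:

import re

newsUrl = 'http://news.gzcc.cn/html/2018/xiaoyuanxinwen_0404/9183.html'
newsId = re.search(r'_(.*).html', newsUrl).group(1).split('/')[-1]
print(newsId)   # 9183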

5. Build the request URL that returns the click count.
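
The counter lives at a separate API endpoint; the template below is the one used inside the step-7 function, with the ID from step 4 spliced in:

clickUrl = 'http://oa.gzcc.cn/api.php?op=count&id={}&modelid=80'.format(newsId)
print(clickUrl)   # http://oa.gzcc.cn/api.php?op=count&id=9183&modelid=80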

6. Fetch the click count.
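
The endpoint answers with a small JavaScript snippet rather than plain JSON, so the number has to be sliced out of the response text (the same string surgery as in step 7):

import requests

res = requests.get(clickUrl)
click = int(res.text.split('.html')[-1].lstrip("(')").rstrip("');"))
print(click)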

7. Wrap steps 4-6 into a single function, def getClickCount(newsUrl):

import requests
import re

newsUrl = 'http://news.gzcc.cn/html/2018/xiaoyuanxinwen_0404/9183.html'

def getClickCount(newsUrl):
    # Step 4: pull the numeric news ID out of the article URL.
    newsId = re.search(r'_(.*).html', newsUrl).group(1).split('/')[-1]
    # Step 5: build the counter API URL; step 6: request it and slice the count out of the response.
    res = requests.get('http://oa.gzcc.cn/api.php?op=count&id={}&modelid=80'.format(newsId))
    return int(res.text.split('.html')[-1].lstrip("(')").rstrip("');"))

print(getClickCount(newsUrl))

8. Wrap the code that scrapes a news article's details into a function, def getNewDetail(newsUrl):

from bs4 import BeautifulSoup
from datetime import datetime

def getNewDetail(newsUrl):
    resd = requests.get(newsUrl)
    resd.encoding = 'utf-8'
    soupd = BeautifulSoup(resd.text, 'html.parser')

    title = soupd.select('.show-title')[0].text
    info = soupd.select('.show-info')[0].text
    # The info line starts with '发布时间:YYYY-MM-DD HH:MM:SS'; parse the 19-character timestamp after the label.
    # Note: lstrip strips a set of characters, not a prefix; it works here because the values never start with those characters.
    dt = datetime.strptime(info.lstrip('发布时间:')[0:19], '%Y-%m-%d %H:%M:%S')
    if info.find('作者:') > 0:
        wr = info[info.find('作者:'):info.find('审核:')].lstrip('作者:').split()[0]
    else:
        wr = 'none'
    if info.find('摄影:') > 0:
        ph = info[info.find('摄影:'):].split()[0].lstrip('摄影:')
    else:
        ph = 'none'
    if info.find('来源:') > 0:
        source = info[info.find('来源:'):].split()[0].lstrip('来源:')
    else:
        source = 'none'
    content = soupd.select('.show-content')[0].text.strip()
    click = getClickCount(newsUrl)
    print('发布时间:', dt, '标题:', title, '链接:', newsUrl, '来源:', source, '作者:', wr, '摄影:', ph, '点击次数:', click)
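
A quick spot check against the sample article from step 7:

getNewDetail(newsUrl)   # prints the metadata and click count of the 2018-04-04 article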

9. Pull all the articles from one news list page, wrapped as a function, def getListPage(pageUrl):

newsurl = 'http://news.gzcc.cn/html/xiaoyuanxinwen/'

def getListPage(newsurl):
    res = requests.get(newsurl)
    res.encoding = 'utf-8'
    soup = BeautifulSoup(res.text, 'html.parser')
    # Only the <li> elements that contain a news title are real articles; grab each one's link and scrape it.
    for new in soup.select('li'):
        if len(new.select('.news-list-title')) > 0:
            newsUrl = new.select('a')[0].attrs['href']
            getNewDetail(newsUrl)

getListPage(newsurl)

10. Get the total article count and compute the total number of list pages, wrapped as a function, def getPageN():

def getPageN():
    res = requests.get('http://news.gzcc.cn/html/xiaoyuanxinwen/')
    res.encoding = 'utf-8'
    soup = BeautifulSoup(res.text, 'html.parser')
    # The first link in the pager holds the total article count, ending in '条'.
    n = int(soup.select('#pages')[0].select('a')[0].text.rstrip('条'))
    return (n - 1) // 10 + 1   # ceiling division: 10 articles per list page
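
For example, if the pager reported 567 articles (a made-up number), (567 - 1) // 10 + 1 = 57 pages: 56 full pages of 10 plus one page holding the remaining 7; with exactly 570 articles it still gives 57, since the page count is the ceiling of n/10.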

11. Get the full details of every article on every news list page.

getListPage(newsurl)   # the first list page lives at the bare directory URL
n = getPageN()
# Only the last two list pages are crawled here as a demo; use range(2, n + 1) to cover every page.
for i in range(n - 1, n + 1):
    print(i)
    listPageUrl = 'http://news.gzcc.cn/html/xiaoyuanxinwen/{}.html'.format(i)
    getListPage(listPageUrl)
