Using regular expressions, getting click counts, and extracting functions

1. Use a regular expression to check whether an email address is entered correctly.

import re

# Email pattern: local part, up to four dotted segments, '@', domain with up to four dotted segments
email_pattern = r'^[a-zA-Z0-9]+(\.[a-zA-Z0-9_-]+){0,4}@[a-zA-Z0-9]+(\.[a-zA-Z0-9]+){0,4}$'
email = '598928876@qq.com'
if re.match(email_pattern, email):
    print('success')
else:
    print('please input a valid email address')
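The same pattern can be exercised on a few inputs at once; the sample addresses below are made up for illustration:

```python
import re

# The pattern from above, tried on a few hypothetical sample addresses
email_pattern = r'^[a-zA-Z0-9]+(\.[a-zA-Z0-9_-]+){0,4}@[a-zA-Z0-9]+(\.[a-zA-Z0-9]+){0,4}$'
samples = ['598928876@qq.com', 'not-an-email', 'user.name@example.co.uk']
# re.match anchors at the start; the pattern itself is anchored at the end with $
results = {s: bool(re.match(email_pattern, s)) for s in samples}
print(results)
```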

2. Use a regular expression to find all the phone numbers.

tel='版权所有:广州商学院   地址:广州市黄埔区九龙大道206号  学校办公室:020-82876130  招生电话:020-82872773'
# re.findall returns every (area code, number) pair, not just the first match
numbers = re.findall(r'(\d{3,4})-(\d{6,8})', tel)
print(numbers)

3. Use a regular expression to tokenize English text.

text='''Blog garden is a developer oriented knowledge sharing community, are not allowed to release any promotion, advertising, and political aspects of the content'''
# Split on runs of whitespace and punctuation
print(re.split(r"[\s,.?!]+", text))
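An equivalent approach, sketched below, is to match the words themselves rather than split on the separators:

```python
import re

text = 'Blog garden is a developer oriented knowledge sharing community'
# Matching runs of letters directly yields the same tokens as splitting on separators
words = re.findall(r"[A-Za-z]+", text)
print(words)
```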


4. Use a regular expression to extract the news ID.

5. Build the Request URL for the click count.

6. Fetch the click count.

7. Wrap steps 4-6 in a function: def getClickCount(newsUrl):

8. Wrap the news-detail code in a function: def getNewDetail(newsUrl):

import requests
import re
from bs4 import BeautifulSoup
from datetime import datetime

def getClickCount(newUrl):
    # The news ID is the number before '.html', e.g. .../xiaoyuanxinwen_0925/8249.html -> 8249
    newId = re.search(r'_(.*)\.html', newUrl).group(1).split('/')[1]
    clickUrl = 'http://oa.gzcc.cn/api.php?op=count&id={}&modelid=80'.format(newId)
    # The API returns a JavaScript snippet; strip it down to the bare number
    return int(requests.get(clickUrl).text.split('.html')[-1].lstrip("('").rstrip("');"))

def getNewDetail(url):
    res1 = requests.get(url)
    res1.encoding = 'utf-8'
    soup1 = BeautifulSoup(res1.text, 'html.parser')

    content = soup1.select('#content')[0].text  # article body
    info = soup1.select('.show-info')[0].text
    d = info.lstrip('发布时间:')[:19]  # publication date and time

    dt = datetime.strptime(d, '%Y-%m-%d %H:%M:%S')
    au = info[info.find('作者:'):].split()[0].lstrip('作者:')  # author
    clickCount = getClickCount(url)
    print(dt, au, clickCount)

newUrl = "http://news.gzcc.cn/html/2017/xiaoyuanxinwen_0925/8249.html"
newId = re.search(r'_(.*)\.html', newUrl).group(1).split('/')[1]
clickUrl = 'http://oa.gzcc.cn/api.php?op=count&id={}&modelid=80'.format(newId)
print(clickUrl, '\n', getClickCount(newUrl))
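The count API returns a small JavaScript snippet rather than plain JSON, which is why getClickCount strips characters around the number. The response text below is a made-up example of that shape, used to show the parsing offline; the regex at the end is a slightly more robust alternative:

```python
import re

# Hypothetical response body, shaped like the snippet the count API sends back
resp = "updateHits('#hits').html('5423');"
# Same approach as getClickCount: split on '.html', then strip the JS punctuation
count = int(resp.split('.html')[-1].lstrip("('").rstrip("');"))
# Alternative: pull the digits out directly with a regex
count2 = int(re.search(r"\.html\('(\d+)'\)", resp).group(1))
print(count, count2)
```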

url = 'http://news.gzcc.cn/html/xiaoyuanxinwen/'
res = requests.get(url)
res.encoding = "utf-8"
soup = BeautifulSoup(res.text, "html.parser")
for news in soup.select("li"):
    if len(news.select(".news-list-title")) > 0:
        t = news.select('.news-list-title')[0].text
        a = news.select('a')[0].attrs['href']  # news link
        print(t, a, '\n')
        getNewDetail(a)
        break  # only process the first news item


posted @ 2018-04-11 22:55  122建雄