Fetch all the information of a news article

Assignment requirements: https://edu.cnblogs.com/campus/gzcc/GZCC-16SE2/homework/2894

Given the link newsUrl of a news article, fetch all of its information:

Title, author, publishing unit, auditor, source

Publish time: convert it to the datetime type
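
For reference, a minimal sketch of this conversion, assuming the date and time strings have already been sliced out of the page (the sample value below is made up):

from datetime import datetime

# hypothetical publish-time string already extracted from the .show-info block
timestr = '2019-03-21 11:24:26'
publish_time = datetime.strptime(timestr, '%Y-%m-%d %H:%M:%S')
print(type(publish_time), publish_time)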

Click count (a sketch of these steps follows the list):

  • newsUrl
  • newsId (extracted with a regular expression, re)
  • clickUrl (str.format(newsId))
  • requests.get(clickUrl)
  • newClick (parsed with string processing or a regular expression)
  • int()
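
A minimal sketch of these click-count steps, using the same article URL and count-API pattern that appear in the main code below (whether that API pattern holds for other articles is an assumption):

import re
import requests

newsUrl = 'http://news.gzcc.cn/html/2019/meitishijie_0321/11033.html'
# newsId: the number right before ".html"
newsId = re.search(r'/(\d+)\.html', newsUrl).group(1)
# clickUrl: fill the id into the count API with str.format
clickUrl = 'http://oa.gzcc.cn/api.php?op=count&id={}&modelid=80'.format(newsId)
res = requests.get(clickUrl)
# the response ends with something like "...html('123');" - strip the wrapper and convert to int
newClick = int(res.text.split('.html')[-1].lstrip("('").rstrip("');"))
print(newClick)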

Wrap the whole process into one simple, clear function.

Main code:

# -*- coding: utf-8 -*-
"""
Created on Mon Apr  1 10:28:21 2019

@author: Administrator
"""
# Fetch all the information of a single news article
import re
import requests
from bs4 import BeautifulSoup
from datetime import datetime


# Get the click count of an article
def click(url):
    # Build the count-API URL from the news id instead of hard-coding id=11033
    clickurl = 'http://oa.gzcc.cn/api.php?op=count&id={}&modelid=80'.format(newsnum(url))
    res = requests.get(clickurl)
    # The response ends with something like "...html('123');" - strip the wrapper and convert to int
    clicks = int(res.text.split('.html')[-1].lstrip("('").rstrip("');"))
    return clicks


# Get the news id from the article URL
def newsnum(url):
    # The id is the number right before ".html", so this works for any article, not just meitishijie_0321
    newsid = re.search(r'/(\d+)\.html', url).group(1)
    return newsid


# Get the publish time and convert it to a datetime object
def newstime(url, soup):
    '''
    Earlier attempts kept for reference:
    time=soup.select('.show-info')[0].text[5:24]
    time=soup.select('.show-info')[0].text.split()[0].lstrip('发布时间')
    '''
    info = soup.select('.show-info')[0].text
    newsdate = info.split()[0].split(':')[1]  # date part, after the 发布时间 label
    newstime = info.split()[1]                # time part
    time = newsdate + ' ' + newstime
    time = datetime.strptime(time, '%Y-%m-%d %H:%M:%S')
    return time


def news(url):
    res = requests.get(url)
    res.encoding = 'utf-8'
    soup = BeautifulSoup(res.text, 'html.parser')
    title = soup.select('.show-title')[0].text  # 标题
    author = soup.select('.show-info')[0].text.split()[2]  # 作者
    auditor = soup.select('.show-info')[0].text.split()[3]  # 审核
    comefrom = soup.select('.show-info')[0].text.split()[4]  # 来源
    detail = soup.select('.show-content p')[0].text  # 内容
    newsid = newsnum(url)  # 新闻编号id
    time = newstime(url, soup)  # 发布时间
    clicktime = click(url)  # 点击次数
    p = print(newsid, title, time, author, auditor, comefrom, detail, clicktime)
    return p

url = "http://news.gzcc.cn/html/2019/meitishijie_0321/11033.html"
news(url)

Run result:

 
