Python3 Crawler (12): Crawler Performance

 Infi-chu:

http://www.cnblogs.com/Infi-chu/

1. Simple serial loop
Requesting the URLs one by one takes the longest: the total time is the sum of every individual request's time.

import requests
url_list = [
    'http://www.baidu.com',
    'http://www.pythonsite.com',
    'http://www.cnblogs.com/'
]

for url in url_list:
    # each request blocks until it finishes before the next one starts
    result = requests.get(url)
    print(result.text)
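To verify this, a minimal timing sketch (added here for illustration; absolute times depend on your network):

import time
import requests

url_list = [
    'http://www.baidu.com',
    'http://www.pythonsite.com',
    'http://www.cnblogs.com/',
]

start = time.perf_counter()
for url in url_list:
    requests.get(url)
# the elapsed time is roughly the sum of the three request times
print('serial total: %.2fs' % (time.perf_counter() - start))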

2. Using a thread pool
The total time is roughly that of the slowest request, which is a big improvement over the serial loop.

import requests
from concurrent.futures import ThreadPoolExecutor

def fetch_request(url):
    result = requests.get(url)
    print(result.text)

url_list = [
    'http://www.baidu.com',
    'http://www.bing.com',
    'http://www.cnblogs.com/'
]
pool = ThreadPoolExecutor(10)

for url in url_list:
    # take a thread from the pool and have it run fetch_request
    pool.submit(fetch_request, url)

# wait for all submitted tasks to finish
pool.shutdown(wait=True)
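The executor can also be used as a context manager, which calls shutdown(wait=True) on exit, and Executor.map replaces the submit loop. A minimal sketch of the same fetch with that idiom:

import requests
from concurrent.futures import ThreadPoolExecutor

def fetch_request(url):
    result = requests.get(url)
    print(result.text)

url_list = [
    'http://www.baidu.com',
    'http://www.bing.com',
    'http://www.cnblogs.com/',
]

# the with-block shuts the pool down and waits for all tasks on exit
with ThreadPoolExecutor(max_workers=10) as pool:
    pool.map(fetch_request, url_list)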

3. Thread pool + callback
A callback is attached to each Future with add_done_callback; when a request finishes, the callback is invoked with the completed Future.

from concurrent.futures import ThreadPoolExecutor
import requests


def fetch_async(url):
    response = requests.get(url)

    return response


def callback(future):
    print(future.result().text)


url_list = [
    'http://www.baidu.com',
    'http://www.bing.com',
    'http://www.cnblogs.com/'
]

pool = ThreadPoolExecutor(5)

for url in url_list:
    v = pool.submit(fetch_async, url)
    # register the callback; it fires when fetch_async completes
    v.add_done_callback(callback)

pool.shutdown()
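Note that future.result() re-raises any exception the worker hit, so a failed request would blow up inside the callback. A more defensive callback might look like this (the error handling is an addition, not part of the original):

def callback(future):
    try:
        response = future.result()
    except Exception as exc:
        # requests.get raised (timeout, DNS failure, ...); result() re-raises it here
        print('request failed:', exc)
    else:
        print(response.text)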

4. Using a process pool
With a process pool, the total time is likewise bounded by the slowest request, but processes cost more resources than threads. Since fetching URLs is I/O-bound, a thread pool is the better choice here.

import requests
from concurrent.futures import ProcessPoolExecutor

def fetch_request(url):
    result = requests.get(url)
    print(result.text)

url_list = [
    'http://www.baidu.com',
    'http://www.bing.com',
    'http://www.cnblogs.com/'
]
# on platforms that spawn child processes (e.g. Windows), the pool must be
# created under the __main__ guard, or each child would re-run this code
if __name__ == '__main__':
    pool = ProcessPoolExecutor(10)

    for url in url_list:
        # take a process from the pool; the child process runs fetch_request
        pool.submit(fetch_request, url)

    # wait for all submitted tasks to finish
    pool.shutdown(wait=True)
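Processes do pay off for CPU-bound work, where Python threads are held back by the GIL. A minimal sketch (cpu_task is a made-up example, not from the original post):

from concurrent.futures import ProcessPoolExecutor

def cpu_task(n):
    # pure computation: this is where processes beat threads
    return sum(i * i for i in range(n))

if __name__ == '__main__':
    with ProcessPoolExecutor() as pool:
        print(list(pool.map(cpu_task, [10 ** 6] * 4)))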

5. Process pool + callback
This works the same way as the thread pool + callback version, but spawning processes wastes more resources than spawning threads.

from concurrent.futures import ProcessPoolExecutor
import requests


def fetch_async(url):
    response = requests.get(url)

    return response


def callback(future):
    print(future.result().text)


url_list = [
    'http://www.baidu.com',
    'http://www.bing.com',
    'http://www.cnblogs.com/'
]

# same caveat as above: create the pool under the __main__ guard
if __name__ == '__main__':
    pool = ProcessPoolExecutor(5)

    for url in url_list:
        v = pool.submit(fetch_async, url)
        # register the callback; it runs back in the parent process
        v.add_done_callback(callback)

    pool.shutdown()
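One difference from the thread version: the worker's return value is pickled from the child back to the parent, and the callback runs in the parent process. Returning the whole Response happens to work with requests, but a common variation (my addition, not in the original) is to return only the small fields you actually need:

import requests
from concurrent.futures import ProcessPoolExecutor

def fetch_async(url):
    response = requests.get(url)
    # return only small, easily pickled fields instead of the whole Response
    return url, response.status_code

def callback(future):
    url, status = future.result()
    print(url, status)

if __name__ == '__main__':
    with ProcessPoolExecutor(5) as pool:
        for url in ['http://www.baidu.com', 'http://www.bing.com']:
            pool.submit(fetch_async, url).add_done_callback(callback)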

 

posted @ 2018-05-03 15:12 Infi_chu