How to read large files in Python

I. Introduction

For small text files we normally just call .read(), .readline(), or .readlines(). But when the file is 10 GB or larger, .read() and .readlines() try to hold the entire contents in memory at once, and the process simply runs out of memory.
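For concreteness, this is the pattern that fails on huge files; a minimal sketch, where 'huge.log' is a placeholder name:

# Anti-patterns for huge files: both calls pull the whole file into memory.
with open('huge.log') as f:    # 'huge.log' is a placeholder name
    data = f.read()            # one giant string the size of the file
with open('huge.log') as f:
    lines = f.readlines()      # a list holding every line at once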

II. Solutions

1. Read in fixed-size chunks. Faced with a file this large, the first instinct is simply to split it up and read it piece by piece instead of all at once:

def read_in_chunks(file_path, chunk_size=1024 * 1024):
    """
    Lazy function (generator) to read a file piece by piece.
    Default chunk size: 1 MB; pass chunk_size to choose your own.
    """
    with open(file_path) as file_object:    # closes the file even if processing fails
        while True:
            chunk_data = file_object.read(chunk_size)
            if not chunk_data:              # empty string signals end of file
                break
            yield chunk_data

if __name__ == "__main__":
    file_path = './path/filename'
    for chunk in read_in_chunks(file_path):
        process(chunk)  # <do something with chunk>
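One caveat worth knowing: a fixed-size chunk can end mid-line (or mid-character for multi-byte encodings), so chunked reading suits byte-oriented work best. As a minimal sketch, assuming the task is a byte-level statistic such as counting lines, opening in binary mode avoids decoding issues entirely; the function name and path are illustrative:

def count_lines(file_path, chunk_size=1024 * 1024):
    """Count lines in a large file by scanning fixed-size binary chunks."""
    count = 0
    with open(file_path, 'rb') as f:      # binary mode: raw bytes, no decoding
        while True:
            chunk = f.read(chunk_size)
            if not chunk:                 # empty bytes object signals end of file
                break
            # a newline is a single byte, so it can never be split across chunks
            count += chunk.count(b'\n')
    return count

if __name__ == "__main__":
    print(count_lines('./path/filename'))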

2. Use with open() and iterate line by line

# If the file is line-based, the file object is itself a lazy iterator:
with open(...) as f:
    for line in f:
        process(line)  # <do something with line>
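To make this concrete, here is a self-contained sketch; the file name 'app.log' and the 'ERROR' marker are assumptions for illustration. The file object yields one line per iteration, so only one line lives in memory at a time:

# Hypothetical example: count ERROR lines in a large log file.
error_count = 0
with open('app.log') as f:    # 'app.log' is a placeholder name
    for line in f:            # one line read per iteration, constant memory
        if 'ERROR' in line:
            error_count += 1
print(error_count)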

3. Process with the fileinput module

import fileinput

for line in fileinput.input(['sum.log']):
    print(line, end='')  # each line keeps its trailing newline, so suppress print's own
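Where fileinput earns its keep is sweeping several files (or stdin) as a single stream while tracking where each line came from. A minimal sketch, with placeholder file names:

import fileinput

# Treat several files as one continuous stream of lines.
for line in fileinput.input(files=['sum.log', 'err.log']):
    # filename() and filelineno() report the source of the current line
    print(fileinput.filename(), fileinput.filelineno(), line.rstrip())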

References:
http://chenqx.github.io/2014/10/29/Python-fastest-way-to-read-a-large-file/
http://www.zhidaow.com/post/python-read-big-file

posted @ 2016-10-11 16:07 天外飞仙丶