pandas notes: ch06 Data Loading, Storage, and File Formats
ch06
Data Loading, Storage, and File Formats
In [1]:
from __future__ import division
from numpy.random import randn
import numpy as np
import os
import sys
import matplotlib.pyplot as plt
np.random.seed(12345)
plt.rc('figure', figsize=(10, 6))
from pandas import Series, DataFrame
import pandas as pd
np.set_printoptions(precision=4)
In [2]:
%pwd
Out[2]:
Reading and Writing Data in Text Format
Table 6-1: Parsing functions in pandas

read_csv — Load delimited data from a file, URL, or file-like object; the default delimiter is a comma
read_table — Load delimited data from a file, URL, or file-like object; the default delimiter is tab ('\t')
read_fwf — Read data in fixed-width column format (that is, with no delimiters)
read_clipboard — Version of read_table that reads data from the clipboard; useful for converting web pages to tables
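As a minimal illustration of `read_csv`, here is a sketch that parses an in-memory `StringIO` object instead of a file on disk; the data below is a hypothetical stand-in for ch06/ex1.csv:

```python
from io import StringIO
import pandas as pd

# Hypothetical comma-delimited data in the shape of ch06/ex1.csv
csv_text = """a,b,c,d,message
1,2,3,4,hello
5,6,7,8,world
9,10,11,12,foo
"""
df = pd.read_csv(StringIO(csv_text))
print(df.shape)                # (3, 5)
print(df['message'].tolist())  # ['hello', 'world', 'foo']
```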
In [19]:
!type ch06\ex1.csv
In [20]:
df = pd.read_csv('ch06/ex1.csv')
df
Out[20]:
In [21]:
'read_table defaults to sep="\t", so reading a CSV with it requires specifying sep=","'
pd.read_table('ch06/ex1.csv', sep=',')
Out[21]:
In [23]:
'A file without a header row'
!type ch06\ex2.csv
In [24]:
'File without a header row: let pandas assign default column names'
pd.read_csv('ch06/ex2.csv', header=None)
Out[24]:
In [25]:
'File without a header row: supply custom column names'
pd.read_csv('ch06/ex2.csv', names=['a', 'b', 'c', 'd', 'message'])
Out[25]:
In [26]:
'Use a particular column as the index'
names = ['a', 'b', 'c', 'd', 'message']
pd.read_csv('ch06/ex2.csv', names=names, index_col='message')
Out[26]:
In [27]:
!type ch06\csv_mindex.csv
In [28]:
'Pass multiple columns to form a hierarchical index'
parsed = pd.read_csv('ch06/csv_mindex.csv', index_col=['key1', 'key2'])
parsed
Out[28]:
In [29]:
list(open('ch06/ex3.txt'))
Out[29]:
In [32]:
'Fields here are separated by a variable amount of whitespace, so pass the regular expression sep=r"\s+"'
result = pd.read_table('ch06/ex3.txt', sep=r'\s+')
result
'There are 3 column names (A, B, C) but 4 data columns, so pandas infers that the first column is the index'
Out[32]:
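This index-inference behavior can be sketched end-to-end on in-memory data; the text below is a hypothetical stand-in for ch06/ex3.txt:

```python
from io import StringIO
import pandas as pd

# Three header names but four data columns: pandas uses the first
# data column as the index (the numbers here are hypothetical).
txt = """A B C
aaa -0.26 -1.03 0.62
bbb 0.93 0.38 -1.55
ccc -0.26 -0.39 -1.22
"""
result = pd.read_csv(StringIO(txt), sep=r'\s+')
print(result.index.tolist())    # ['aaa', 'bbb', 'ccc']
print(result.columns.tolist())  # ['A', 'B', 'C']
```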
In [31]:
'skiprows: skip the first, third, and fourth rows'
!type ch06\ex4.csv
pd.read_csv('ch06/ex4.csv', skiprows=[0, 2, 3])
Out[31]:
In [33]:
!type ch06\ex5.csv
result = pd.read_csv('ch06/ex5.csv')
result
Out[33]:
In [34]:
'isnull: test which values are NaN'
pd.isnull(result)
Out[34]:
In [35]:
'na_values= accepts a list — variant 1'
result = pd.read_csv('ch06/ex5.csv', na_values=['NULL'])
result
Out[35]:
In [40]:
'na_values= accepts a list — variant 2'
result = pd.read_csv('ch06/ex5.csv', na_values=['world','foo'])
result
Out[40]:
In [37]:
'na_values= accepts a dict, specifying different NA sentinel values per column'
sentinels = {'message': ['foo', 'NA'], 'something': ['two']}
pd.read_csv('ch06/ex5.csv', na_values=sentinels)
Out[37]:
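A self-contained sketch of per-column NA sentinels, using hypothetical in-memory data rather than ch06/ex5.csv:

```python
from io import StringIO
import pandas as pd

# Hypothetical stand-in for ch06/ex5.csv
txt = """something,a,b,message
one,1,2,hello
two,5,6,foo
three,9,10,world
"""
# 'foo' counts as NA only in 'message'; 'two' only in 'something'
sentinels = {'message': ['foo'], 'something': ['two']}
frame = pd.read_csv(StringIO(txt), na_values=sentinels)
print(int(frame['message'].isnull().sum()),
      int(frame['something'].isnull().sum()))  # 1 1
```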
Table 6-2: read_csv/read_table function arguments

path — String indicating filesystem location, URL, or file-like object
sep, delimiter — Character sequence or regular expression used to split fields in each row
header — Row number to use as column names. Defaults to 0 (the first row); should be None if the file has no header row
index_col — Column numbers or names to use as the row index; can be a single name/number or a list of them (for a hierarchical index)
names — List of column names for the result; used together with header=None
skiprows — Number of rows at the beginning of the file to ignore, or a list of row numbers (starting from 0) to skip
na_values — Sequence of values to replace with NA
comment — Character(s) used to split comments off the end of lines
parse_dates — Attempt to parse data to datetime; False by default. If True, attempt to parse all columns. Otherwise can specify a list of column numbers or names to parse. If an element of the list is itself a list or tuple, the multiple columns are combined and parsed as a single date (e.g., date and time split across two columns)
keep_date_col — If columns are joined to parse a date, keep the joined columns; False by default
converters — Dict mapping column number or name to a function; e.g., {'foo': f} applies the function f to all values in the 'foo' column
dayfirst — When parsing potentially ambiguous dates, treat them as international format (e.g., 7/6/2012 → June 7, 2012); False by default
date_parser — Function to use to parse dates
nrows — Number of rows to read from the beginning of the file
iterator — Return a TextParser object for reading the file piecemeal
chunksize — For iteration, size of file chunks
skip_footer — Number of lines to ignore at the end of the file
verbose — Print various parser output information, such as the number of missing values placed in non-numeric columns
encoding — Text encoding for Unicode, e.g., 'utf-8' for UTF-8 encoded text
squeeze — If the parsed data contains only one column, return a Series
thousands — Separator for thousands, e.g., ',' or '.'
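Two of the less obvious arguments, converters and parse_dates, can be sketched on hypothetical in-memory data (column names and values below are illustrative only):

```python
from io import StringIO
import pandas as pd

# converters: map a column name to a function applied to each value
# while parsing.
txt = "name,score\nalice,90\nbob,80\n"
frame = pd.read_csv(StringIO(txt), converters={'name': str.upper})
print(frame['name'].tolist())  # ['ALICE', 'BOB']

# parse_dates: parse the listed columns to datetime
txt2 = "date,value\n2000-01-01,1\n2000-01-02,2\n"
frame2 = pd.read_csv(StringIO(txt2), parse_dates=['date'])
print(frame2['date'].dt.year.tolist())  # [2000, 2000]
```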
Reading Text Files in Pieces
In [4]:
result = pd.read_csv('ch06/ex6.csv')
result.head(10)
Out[4]:
In [42]:
'Pass nrows to read only a given number of rows'
pd.read_csv('ch06/ex6.csv', nrows=5)
Out[42]:
In [43]:
'Pass chunksize to read the file in chunks; this returns an iterator over the chunks'
chunker = pd.read_csv('ch06/ex6.csv', chunksize=1000)
chunker
Out[43]:
In [69]:
chunker = pd.read_csv('ch06/ex6.csv', chunksize=573)
i = 0
'Each pass over chunker resumes from the position where the previous chunk left off'
for each in chunker:
    i += 1
    print(i, end=' ')
In [58]:
chunker = pd.read_csv('ch06/ex6.csv', chunksize=1000)
tot = Series([], dtype=float)
for piece in chunker:
    tot = tot.add(piece['key'].value_counts(), fill_value=0)
tot = tot.sort_values(ascending=False)
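The same accumulation pattern can be sketched end-to-end on a tiny in-memory file (the data and chunksize=2 below are hypothetical):

```python
from io import StringIO
import pandas as pd

# Hypothetical in-memory file with a single 'key' column
txt = "key\na\nb\na\nc\nb\na\n"
chunker = pd.read_csv(StringIO(txt), chunksize=2)

# Accumulate value counts chunk by chunk
tot = pd.Series([], dtype=float)
for piece in chunker:
    tot = tot.add(piece['key'].value_counts(), fill_value=0)
tot = tot.sort_values(ascending=False)
print(tot.index[0], tot.iloc[0])  # a 3.0
```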
In [70]:
piece['key'].value_counts()
Out[70]:
In [60]:
tot[:10]
Out[60]:
Writing Data Out to Text Format
In [71]:
data = pd.read_csv('ch06/ex5.csv')
data
Out[71]:
In [72]:
data.to_csv('ch06/out.csv')
!type ch06\out.csv
In [73]:
'sys.stdout: write the text result to the screen instead of a file'
'sep= changes the delimiter'
data.to_csv(sys.stdout, sep='|')
In [74]:
'na_rep= changes how missing values are represented in the output'
data.to_csv(sys.stdout, na_rep='NULL')
In [75]:
'Disable writing the row and column labels'
data.to_csv(sys.stdout, index=False, header=False)
In [76]:
'Write only a subset of the columns, in the order given'
data.to_csv(sys.stdout, index=False, columns=['a', 'b', 'c'])
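These to_csv options can also be combined; writing to an in-memory StringIO buffer (instead of sys.stdout) makes the output easy to capture. A sketch with a hypothetical two-column frame:

```python
from io import StringIO
import numpy as np
import pandas as pd

# Hypothetical frame with a missing value in column 'b'
data = pd.DataFrame({'a': [1, 2], 'b': [np.nan, 4]})

buf = StringIO()
data.to_csv(buf, sep='|', na_rep='NULL', index=False)
print(buf.getvalue().splitlines())  # ['a|b', '1|NULL', '2|4.0']
```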
In [77]:
'Series also has a to_csv method:'
dates = pd.date_range('1/1/2000', periods=7)
ts = Series(np.arange(7), index=dates)
ts.to_csv('ch06/tseries.csv')
!type ch06\tseries.csv
In [78]:
'Series from_csv method'
'read_csv returns a DataFrame; to get a Series back here, use Series.from_csv'
'(note: Series.from_csv was later deprecated and removed; modern pandas uses pd.read_csv(..., index_col=0, parse_dates=True).squeeze("columns"))'
Series.from_csv('ch06/tseries.csv', parse_dates=True)
Out[78]:
In [5]:
'squeeze=True also makes pd.read_csv return a Series when the result has a single data column'
'(note: the squeeze keyword was removed in pandas 2.0; call .squeeze("columns") on the result instead)'
pd.read_csv('ch06/tseries.csv', squeeze=True)
Out[5]:
Manually Working with Delimited Formats

The csv module
In [79]:
!type ch06\ex7.csv
In [7]:
'Pass an open file or any file-like object to csv.reader'
import csv
f = open('ch06/ex7.csv')
'csv.reader returns an iterator'
reader = csv.reader(f)
In [8]:
for line in reader:
print(line)
In [9]:
'Reshape the data into a dict of columns'
lines = list(csv.reader(open('ch06/ex7.csv')))
header, values = lines[0], lines[1:]
data_dict = {h: v for h, v in zip(header, zip(*values))}
data_dict
Out[9]:
In [11]:
'Define a custom delimiter-handling class by subclassing csv.Dialect'
class my_dialect(csv.Dialect):
    lineterminator = '\n'
    delimiter = ';'
    quotechar = '"'
    quoting = csv.QUOTE_MINIMAL
!type ch06\ex7.csv
In [12]:
f = open('ch06/ex7.csv')
reader = csv.reader(f,dialect=my_dialect)
list(reader)
Out[12]:
In [88]:
'You can also pass dialect options as keyword arguments to csv.reader instead of defining a subclass'
f = open('ch06/ex7.csv')
reader = csv.reader(f,delimiter='|')
list(reader)
Out[88]:
Table 6-3: CSV dialect options

delimiter — One-character string to separate fields; defaults to ','
lineterminator — Line terminator for writing; defaults to '\r\n'. The reader ignores this and recognizes cross-platform line terminators
quotechar — Quote character for fields with special characters (like a delimiter); default is '"'
quoting — Quoting convention. Options include csv.QUOTE_ALL (quote all fields), csv.QUOTE_MINIMAL (only fields with special characters like the delimiter), csv.QUOTE_NONNUMERIC, and csv.QUOTE_NONE (no quoting). See Python's documentation for full details; defaults to csv.QUOTE_MINIMAL
skipinitialspace — Ignore whitespace after each delimiter; default False
doublequote — How to handle the quoting character inside a field; if True, it is doubled. See the online documentation for full detail and behavior
escapechar — String to escape the delimiter if quoting is set to csv.QUOTE_NONE; disabled by default
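A round-trip sketch using a Dialect subclass like my_dialect above, writing to and reading from in-memory buffers rather than files:

```python
import csv
from io import StringIO

# Same shape as my_dialect above: semicolon-delimited, '\n' line endings
class semi_dialect(csv.Dialect):
    lineterminator = '\n'
    delimiter = ';'
    quotechar = '"'
    quoting = csv.QUOTE_MINIMAL

buf = StringIO()
writer = csv.writer(buf, dialect=semi_dialect)
writer.writerow(('one', 'two', 'three'))
writer.writerow(('1', '2', '3'))

# Read the written text back with the same dialect
rows = list(csv.reader(StringIO(buf.getvalue()), dialect=semi_dialect))
print(rows)  # [['one', 'two', 'three'], ['1', '2', '3']]
```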
In [89]:
with open('mydata.csv', 'w') as f:
    writer = csv.writer(f, dialect=my_dialect)
    writer.writerow(('one', 'two', 'three'))
    writer.writerow(('1', '2', '3'))
    writer.writerow(('4', '5', '6'))
    writer.writerow(('7', '8', '9'))
In [91]:
!type mydata.csv
JSON Data

JSON (JavaScript Object Notation) is very nearly valid Python code: its basic types are objects (dicts), arrays (lists), strings, numbers, booleans, and null. All keys in an object must be strings.
In [93]:
obj = """
{"name": "Wes",
"places_lived": ["United States", "Spain", "Germany"],
"pet": null,
"siblings": [{"name": "Scott", "age": 25, "pet": "Zuko"},
{"name": "Katie", "age": 33, "pet": "Cisco"}]
}
"""
In [94]:
'json.loads converts a JSON string to its Python form'
import json
result = json.loads(obj)
result
Out[94]:
In [95]:
'json.dumps converts a Python object back to JSON'
asjson = json.dumps(result)
In [96]:
'How you convert a JSON object into a DataFrame or other data structure for analysis is up to you:'
siblings = DataFrame(result['siblings'], columns=['name', 'age'])
siblings
Out[96]:
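pandas can also serialize in the other direction: DataFrame.to_json produces a JSON string and pd.read_json parses one back. A minimal round-trip sketch with the siblings data:

```python
from io import StringIO
import pandas as pd

siblings = pd.DataFrame([{'name': 'Scott', 'age': 25},
                         {'name': 'Katie', 'age': 33}])

# Serialize to a JSON string, then parse it back into a DataFrame
js = siblings.to_json(orient='records')
back = pd.read_json(StringIO(js), orient='records')
print(back['age'].tolist())  # [25, 33]
```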
XML and HTML, Web Scraping (skipped)
NB. The Yahoo! Finance API has changed and this example no longer works
In [ ]:
from lxml.html import parse
from urllib.request import urlopen  # urllib2 in the original Python 2 code
parsed = parse(urlopen('http://finance.yahoo.com/q/op?s=AAPL+Options'))
doc = parsed.getroot()
In [ ]:
links = doc.findall('.//a')
links[15:20]
In [ ]:
lnk = links[28]
lnk
lnk.get('href')
lnk.text_content()
In [ ]:
urls = [lnk.get('href') for lnk in doc.findall('.//a')]
urls[-10:]
In [ ]:
tables = doc.findall('.//table')
calls = tables[9]
puts = tables[13]
In [ ]:
rows = calls.findall('.//tr')
In [ ]:
def _unpack(row, kind='td'):
    elts = row.findall('.//%s' % kind)
    return [val.text_content() for val in elts]
In [ ]:
_unpack(rows[0], kind='th')
_unpack(rows[1], kind='td')
In [ ]:
from pandas.io.parsers import TextParser
def parse_options_data(table):
    rows = table.findall('.//tr')
    header = _unpack(rows[0], kind='th')
    data = [_unpack(r) for r in rows[1:]]
    return TextParser(data, names=header).get_chunk()
In [ ]:
call_data = parse_options_data(calls)
put_data = parse_options_data(puts)
call_data[:10]
Parsing XML with lxml.objectify (skipped)
In [ ]:
%cd ch06/mta_perf/Performance_XML_Data
In [ ]:
!head -21 Performance_MNR.xml
In [ ]:
from lxml import objectify
path = 'Performance_MNR.xml'
parsed = objectify.parse(open(path))
root = parsed.getroot()
In [ ]:
data = []
skip_fields = ['PARENT_SEQ', 'INDICATOR_SEQ',
               'DESIRED_CHANGE', 'DECIMAL_PLACES']
for elt in root.INDICATOR:
    el_data = {}
    for child in elt.getchildren():
        if child.tag in skip_fields:
            continue
        el_data[child.tag] = child.pyval
    data.append(el_data)
In [ ]:
perf = DataFrame(data)
perf
In [ ]:
root
In [ ]:
root.get('href')
In [ ]:
root.text
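If lxml is not available, the same record-collection idea can be sketched with the standard library's xml.etree.ElementTree; the XML below is a tiny hypothetical stand-in for Performance_MNR.xml:

```python
import xml.etree.ElementTree as ET
import pandas as pd

# Tiny hypothetical stand-in for Performance_MNR.xml
xml_text = """<ROOT>
  <INDICATOR><AGENCY_NAME>Metro-North</AGENCY_NAME><YTD_ACTUAL>96.9</YTD_ACTUAL></INDICATOR>
  <INDICATOR><AGENCY_NAME>Metro-North</AGENCY_NAME><YTD_ACTUAL>95.0</YTD_ACTUAL></INDICATOR>
</ROOT>"""

root = ET.fromstring(xml_text)
# Collect each record's child tags and text into a dict, as above
data = [{child.tag: child.text for child in elt}
        for elt in root.iter('INDICATOR')]
perf = pd.DataFrame(data)
print(perf.shape)  # (2, 2)
```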
Binary Data Formats
In [97]:
frame = pd.read_csv('ch06/ex1.csv')
frame
Out[97]:
In [100]:
'to_pickle: save the object to disk in pickle form'
frame.to_pickle('ch06/frame_pickle')
In [101]:
'read_pickle: load pickled data from disk'
pd.read_pickle('ch06/frame_pickle')
Out[101]:
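A self-contained round-trip sketch, using a temporary directory instead of ch06/frame_pickle:

```python
import os
import tempfile
import pandas as pd

frame = pd.DataFrame({'a': [1, 2, 3], 'b': ['x', 'y', 'z']})

# Write the frame to a pickle file in a temporary directory,
# read it back, and check the round trip
path = os.path.join(tempfile.mkdtemp(), 'frame_pickle')
frame.to_pickle(path)
restored = pd.read_pickle(path)
print(restored.equals(frame))  # True
```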
Using HDF5 Format (skipped)
In [ ]:
store = pd.HDFStore('mydata.h5')
store['obj1'] = frame
store['obj1_col'] = frame['a']
store
In [ ]:
store['obj1']
In [ ]:
store.close()
os.remove('mydata.h5')
Interacting with HTML and Web APIs (skipped)
In [ ]:
import requests
url = 'https://api.github.com/repos/pydata/pandas/milestones/28/labels'
resp = requests.get(url)
resp
In [ ]:
data = json.loads(resp.text)
data[:5]
In [ ]:
issue_labels = DataFrame(data)
issue_labels
Interacting with Databases (skipped)
In [ ]:
import sqlite3
query = """
CREATE TABLE test
(a VARCHAR(20), b VARCHAR(20),
c REAL, d INTEGER
);"""
con = sqlite3.connect(':memory:')
con.execute(query)
con.commit()
In [ ]:
data = [('Atlanta', 'Georgia', 1.25, 6),
('Tallahassee', 'Florida', 2.6, 3),
('Sacramento', 'California', 1.7, 5)]
stmt = "INSERT INTO test VALUES(?, ?, ?, ?)"
con.executemany(stmt, data)
con.commit()
In [ ]:
cursor = con.execute('select * from test')
rows = cursor.fetchall()
rows
In [ ]:
cursor.description
In [ ]:
DataFrame(rows, columns=[x[0] for x in cursor.description])
In [ ]:
import pandas.io.sql as sql
sql.read_sql('select * from test', con)
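The whole flow (create a table, insert rows, read back with pandas) can be sketched against an in-memory SQLite database; the table and column names here are hypothetical:

```python
import sqlite3
import pandas as pd

# In-memory SQLite database: create, insert, read back with read_sql
con = sqlite3.connect(':memory:')
con.execute('CREATE TABLE test (a VARCHAR(20), b REAL)')
con.executemany('INSERT INTO test VALUES (?, ?)',
                [('x', 1.5), ('y', 2.5)])
con.commit()

df = pd.read_sql('select * from test', con)
print(df.shape)  # (2, 2)
con.close()
```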
