pandas notes - ch07: Data wrangling: cleaning, transforming, merging, reshaping

 

Chapter 7: Data Wrangling: Cleaning, Transforming, Merging, Reshaping

In [1]:
from __future__ import division
from numpy.random import randn
import numpy as np
import os
import matplotlib.pyplot as plt
np.random.seed(12345)
plt.rc('figure', figsize=(10, 6))
from pandas import Series, DataFrame
import pandas
import pandas as pd
np.set_printoptions(precision=4, threshold=500)
pd.options.display.max_rows = 100
In [2]:
%matplotlib inline
 

Combining and Merging Datasets

 

Database-style DataFrame Merges

In [16]:
df1 = DataFrame({'key': ['b', 'b', 'a', 'c', 'a', 'a', 'b'],
                 'data1': range(7)})
df2 = DataFrame({'key': ['a', 'b', 'd'],
                 'data2': range(3)})
df1
Out[16]:
  data1  key
0 0 b
1 1 b
2 2 a
3 3 c
4 4 a
5 5 a
6 6 b
In [4]:
df2
Out[4]:
  data2  key
0 0 a
1 1 b
2 2 d
In [7]:
"By default, merge joins on the overlapping column names"
pd.merge(df1, df2)
Out[7]:
  data1  key  data2
0 0 b 1
1 1 b 1
2 6 b 1
3 2 a 0
4 4 a 0
5 5 a 0
In [6]:
"on=: it is good practice to specify the join column explicitly:"
pd.merge(df1, df2, on='key')
Out[6]:
  data1  key  data2
0 0 b 1
1 1 b 1
2 6 b 1
3 2 a 0
4 4 a 0
5 5 a 0
In [8]:
"left_on= / right_on=: when the join columns have different names, specify them separately"
df3 = DataFrame({'lkey': ['b', 'b', 'a', 'c', 'a', 'a', 'b'],
                 'data1': range(7)})
df4 = DataFrame({'rkey': ['a', 'b', 'd'],
                 'data2': range(3)})
pd.merge(df3, df4, left_on='lkey', right_on='rkey')
Out[8]:
  data1  lkey  data2  rkey
0 0 b 1 b
1 1 b 1 b
2 6 b 1 b
3 2 a 0 a
4 4 a 0 a
5 5 a 0 a
In [11]:
"how=: defaults to 'inner' (intersection of keys); other options are 'left', 'right', and 'outer' (union of left and right keys)"
pd.merge(df1, df2, how='outer')
Out[11]:
  data1  key  data2
0 0 b 1
1 1 b 1
2 6 b 1
3 2 a 0
4 4 a 0
5 5 a 0
6 3 c NaN
7 NaN d 2
In [17]:
pd.merge(df1, df2, on='key', how='left')
Out[17]:
  data1  key  data2
0 0 b 1
1 1 b 1
2 2 a 0
3 3 c NaN
4 4 a 0
5 5 a 0
6 6 b 1
In [18]:
"Many-to-many merges form the Cartesian product of matching rows. E.g. df1 has 3 'b' rows and df2 has 2, so pd.merge(df1, df2) yields 3 * 2 = 6 'b' rows"
df1 = DataFrame({'key': ['b', 'b', 'a', 'c', 'a', 'b'],
                 'data1': range(6)})
df2 = DataFrame({'key': ['a', 'b', 'a', 'b', 'd'],
                 'data2': range(5)})
In [19]:
df1
Out[19]:
  data1  key
0 0 b
1 1 b
2 2 a
3 3 c
4 4 a
5 5 b
In [20]:
df2
Out[20]:
  data2  key
0 0 a
1 1 b
2 2 a
3 3 b
4 4 d
In [22]:
pd.merge(df1, df2, how='inner')
Out[22]:
  data1  key  data2
0 0 b 1
1 0 b 3
2 1 b 1
3 1 b 3
4 5 b 1
5 5 b 3
6 2 a 0
7 2 a 2
8 4 a 0
9 4 a 2
In [21]:
"The join method only affects which keys appear in the result"
pd.merge(df1, df2, on='key', how='left')
Out[21]:
  data1  key  data2
0 0 b 1
1 0 b 3
2 1 b 1
3 1 b 3
4 2 a 0
5 2 a 2
6 3 c NaN
7 4 a 0
8 4 a 2
9 5 b 1
10 5 b 3
In [23]:
"Merging on multiple keys: pass a list of the column names to merge on"
left = DataFrame({'key1': ['foo', 'foo', 'bar'],
                  'key2': ['one', 'two', 'one'],
                  'lval': [1, 2, 3]})
right = DataFrame({'key1': ['foo', 'foo', 'bar', 'bar'],
                   'key2': ['one', 'one', 'one', 'two'],
                   'rval': [4, 5, 6, 7]})
pd.merge(left, right, on=['key1', 'key2'], how='outer')
Out[23]:
  key1  key2  lval  rval
0 foo one 1 4
1 foo one 1 5
2 foo two 2 NaN
3 bar one 3 6
4 bar two NaN 7
In [24]:
pd.merge(left, right, on='key1')
Out[24]:
  key1  key2_x  lval  key2_y  rval
0 foo one 1 one 4
1 foo one 1 one 5
2 foo two 2 one 4
3 foo two 2 one 5
4 bar one 3 one 6
5 bar one 3 two 7
In [25]:
"suffixes=: controls how duplicate names among the non-join columns are handled"
pd.merge(left, right, on='key1', suffixes=('_left', '_right'))
Out[25]:
  key1  key2_left  lval  key2_right  rval
0 foo one 1 one 4
1 foo one 1 one 5
2 foo two 2 one 4
3 foo two 2 one 5
4 bar one 3 one 6
5 bar one 3 two 7
 

Table 7-1: merge function arguments

left         DataFrame to be merged on the left side
right        DataFrame to be merged on the right side
how          One of 'inner', 'outer', 'left', 'right'. 'inner' by default
on           Column names to join on. Must be found in both DataFrame objects. If not specified and no other join keys are given, the intersection of the column names in left and right is used as the join keys
left_on      Columns in the left DataFrame to use as join keys
right_on     Columns in the right DataFrame to use as join keys
left_index   Use the row index of the left DataFrame as its join key
right_index  Analogous to left_index
sort         Sort the merged data by the join keys; True by default. Disabling this can give better performance on large datasets
suffixes     Tuple of strings appended to overlapping column names; defaults to ('_x', '_y'). For example, if both DataFrame objects have a 'data' column, the result would contain 'data_x' and 'data_y'
copy         If False, avoid copying data into the resulting data structure in some exceptional cases. By default always copies
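A minimal sketch exercising several of the parameters from the table above (how, on, suffixes, sort); the frame contents are made up for illustration:

```python
import pandas as pd

left = pd.DataFrame({'key': ['b', 'a', 'c'], 'data': [1, 2, 3]})
right = pd.DataFrame({'key': ['a', 'b', 'b'], 'data': [4, 5, 6]})

# how='outer' keeps keys from both sides, suffixes rename the
# overlapping non-key column 'data', and sort=True orders by the key.
merged = pd.merge(left, right, on='key', how='outer',
                  suffixes=('_l', '_r'), sort=True)
print(merged)
```

Because 'c' appears only in left, its data_r entry is NaN; 'b' matches two right rows and is duplicated.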
 

Merging on Index

Pass left_index=True or right_index=True to use one or both indexes as the merge key(s)

In [26]:
left1 = DataFrame({'key': ['a', 'b', 'a', 'a', 'b', 'c'],
                  'value': range(6)})
right1 = DataFrame({'group_val': [3.5, 7]}, index=['a', 'b'])
In [27]:
left1
Out[27]:
  key  value
0 a 0
1 b 1
2 a 2
3 a 3
4 b 4
5 c 5
In [28]:
right1
Out[28]:
 group_val
a 3.5
b 7.0
In [31]:
"Use the right-hand side's index as its merge key"
pd.merge(left1, right1, left_on='key', right_index=True)
Out[31]:
  key  value  group_val
0 a 0 3.5
2 a 2 3.5
3 a 3 3.5
1 b 1 7.0
4 b 4 7.0
In [30]:
pd.merge(left1, right1, left_on='key', right_index=True, how='outer')
Out[30]:
  key  value  group_val
0 a 0 3.5
2 a 2 3.5
3 a 3 3.5
1 b 1 7.0
4 b 4 7.0
5 c 5 NaN
In [33]:
"Merging hierarchically indexed data"
lefth = DataFrame({'key1': ['Ohio', 'Ohio', 'Ohio', 'Nevada', 'Nevada'],
                   'key2': [2000, 2001, 2002, 2001, 2002],
                   'data': np.arange(5.)})
righth = DataFrame(np.arange(12).reshape((6, 2)),
                   index=[['Nevada', 'Nevada', 'Ohio', 'Ohio', 'Ohio', 'Ohio'],
                          [2001, 2000, 2000, 2000, 2001, 2002]],
                   columns=['event1', 'event2'])
lefth
Out[33]:
  data  key1  key2
0 0 Ohio 2000
1 1 Ohio 2001
2 2 Ohio 2002
3 3 Nevada 2001
4 4 Nevada 2002
In [34]:
righth
Out[34]:
              event1  event2
Nevada 2001        0       1
       2000        2       3
Ohio   2000        4       5
       2000        6       7
       2001        8       9
       2002       10      11
In [35]:
"Solution: pass a list of the columns to merge on to the corresponding parameter"
pd.merge(lefth, righth, left_on=['key1', 'key2'], right_index=True)
Out[35]:
  data  key1  key2  event1  event2
0 0 Ohio 2000 4 5
0 0 Ohio 2000 6 7
1 1 Ohio 2001 8 9
2 2 Ohio 2002 10 11
3 3 Nevada 2001 0 1
In [36]:
pd.merge(lefth, righth, left_on=['key1', 'key2'],
         right_index=True, how='outer')
Out[36]:
  data  key1  key2  event1  event2
0 0 Ohio 2000 4 5
0 0 Ohio 2000 6 7
1 1 Ohio 2001 8 9
2 2 Ohio 2002 10 11
3 3 Nevada 2001 0 1
4 4 Nevada 2002 NaN NaN
4 NaN Nevada 2000 2 3
In [37]:
left2 = DataFrame([[1., 2.], [3., 4.], [5., 6.]], index=['a', 'c', 'e'],
                 columns=['Ohio', 'Nevada'])
right2 = DataFrame([[7., 8.], [9., 10.], [11., 12.], [13, 14]],
                   index=['b', 'c', 'd', 'e'], columns=['Missouri', 'Alabama'])
In [38]:
left2
Out[38]:
  Ohio  Nevada
a 1 2
c 3 4
e 5 6
In [39]:
right2
Out[39]:
  Missouri  Alabama
b 7 8
c 9 10
d 11 12
e 13 14
In [40]:
"Use both sides' indexes as the merge keys"
pd.merge(left2, right2, how='outer', left_index=True, right_index=True)
Out[40]:
  Ohio  Nevada  Missouri  Alabama
a 1 2 NaN NaN
b NaN NaN 7 8
c 3 4 9 10
d NaN NaN 11 12
e 5 6 13 14
In [43]:
"The join instance method merges DataFrames with identical or similar indexes, whether or not they have overlapping columns"
"DataFrame's join method performs a left join on the join keys by default"
left2.join(right2, how='outer')
Out[43]:
  Ohio  Nevada  Missouri  Alabama
a 1 2 NaN NaN
b NaN NaN 7 8
c 3 4 9 10
d NaN NaN 11 12
e 5 6 13 14
In [42]:
"left1's 'key' column joined against right1's index"
left1.join(right1, on='key')
Out[42]:
  key  value  group_val
0 a 0 3.5
1 b 1 7.0
2 a 2 3.5
3 a 3 3.5
4 b 4 7.0
5 c 5 NaN
In [44]:
another = DataFrame([[7., 8.], [9., 10.], [11., 12.], [16., 17.]],
                    index=['a', 'c', 'e', 'f'], columns=['New York', 'Oregon'])
In [45]:
"Joining a list of DataFrames on index (the more general concat function, introduced next, can also do this)"
left2.join([right2, another])
Out[45]:
  Ohio  Nevada  Missouri  Alabama  New York  Oregon
a 1 2 NaN NaN 7 8
c 3 4 9 10 9 10
e 5 6 13 14 11 12
In [46]:
left2.join([right2, another], how='outer')
Out[46]:
  Ohio  Nevada  Missouri  Alabama  New York  Oregon
a 1 2 NaN NaN 7 8
b NaN NaN 7 8 NaN NaN
c 3 4 9 10 9 10
d NaN NaN 11 12 NaN NaN
e 5 6 13 14 11 12
f NaN NaN NaN NaN 16 17
 

Concatenating Along an Axis

In [47]:
arr = np.arange(12).reshape((3, 4))
In [48]:
arr
Out[48]:
array([[ 0,  1,  2,  3],
       [ 4,  5,  6,  7],
       [ 8,  9, 10, 11]])
In [49]:
np.concatenate([arr, arr], axis=1)
Out[49]:
array([[ 0,  1,  2,  3,  0,  1,  2,  3],
       [ 4,  5,  6,  7,  4,  5,  6,  7],
       [ 8,  9, 10, 11,  8,  9, 10, 11]])
In [50]:
s1 = Series([0, 1], index=['a', 'b'])
s2 = Series([2, 3, 4], index=['c', 'd', 'e'])
s3 = Series([5, 6], index=['f', 'g'])
In [51]:
"By default, concat works along axis=0."
pd.concat([s1, s2, s3])
Out[51]:
a    0
b    1
c    2
d    3
e    4
f    5
g    6
dtype: int64
In [52]:
"Passing axis=1 instead produces a DataFrame"
"In this case the labels on the axis=1 axis do not overlap"
pd.concat([s1, s2, s3], axis=1)
Out[52]:
  0  1  2
a 0 NaN NaN
b 1 NaN NaN
c NaN 2 NaN
d NaN 3 NaN
e NaN 4 NaN
f NaN NaN 5
g NaN NaN 6
In [53]:
s4 = pd.concat([s1 * 5, s3])
In [54]:
pd.concat([s1, s4], axis=1)
Out[54]:
  0  1
a 0 0
b 1 5
f NaN 5
g NaN 6
In [55]:
"join=: either 'inner' or 'outer' (default 'outer'); unlike merge, concat does not support 'left' or 'right'"
pd.concat([s1, s4], axis=1, join='inner')
Out[55]:
  0  1
a 0 0
b 1 5
In [56]:
"join_axes=: explicitly specify the indexes to use for the other axes (no union/intersection is computed)"
pd.concat([s1, s4], axis=1, join_axes=[['a', 'c', 'b', 'e']])
Out[56]:
  0  1
a 0 0
c NaN NaN
b 1 5
e NaN NaN
In [58]:
"keys=: create a hierarchical index on the concatenation axis:"
result = pd.concat([s1, s1, s3], keys=['one', 'two', 'three'])
In [59]:
result
Out[59]:
one    a    0
       b    1
two    a    0
       b    1
three  f    5
       g    6
dtype: int64
In [60]:
# the unstack function is covered in detail later
result.unstack()
Out[60]:
  a  b  f  g
one 0 1 NaN NaN
two 0 1 NaN NaN
three NaN NaN 5 6
In [61]:
"When combining Series along axis=1, the keys become the DataFrame's column headers"
pd.concat([s1, s2, s3], axis=1, keys=['one', 'two', 'three'])
Out[61]:
  one  two  three
a 0 NaN NaN
b 1 NaN NaN
c NaN 2 NaN
d NaN 3 NaN
e NaN 4 NaN
f NaN NaN 5
g NaN NaN 6
In [62]:
"axis=1 works for DataFrames too. Note: unlike with Series, here the keys= argument creates hierarchical labels on the concatenation axis (the columns)"
df1 = DataFrame(np.arange(6).reshape(3, 2), index=['a', 'b', 'c'],
                columns=['one', 'two'])
df2 = DataFrame(5 + np.arange(4).reshape(2, 2), index=['a', 'c'],
                columns=['three', 'four'])
pd.concat([df1, df2], axis=1, keys=['level1', 'level2'])
Out[62]:
  level1      level2
  one  two    three  four
a 0 1 5 6
b 2 3 NaN NaN
c 4 5 7 8
In [63]:
"If a dict is passed instead of a list, its keys are used for the keys= option"
pd.concat({'level1': df1, 'level2': df2}, axis=1)
Out[63]:
  level1      level2
  one  two    three  four
a 0 1 5 6
b 2 3 NaN NaN
c 4 5 7 8
In [64]:
"names=: name the levels of the hierarchical index"
pd.concat([df1, df2], axis=1, keys=['level1', 'level2'],
          names=['upper', 'lower'])
Out[64]:
upper  level1      level2
lower  one  two    three  four
a 0 1 5 6
b 2 3 NaN NaN
c 4 5 7 8
In [3]:
df1 = DataFrame(np.random.randn(3, 4), columns=['a', 'b', 'c', 'd'])
df2 = DataFrame(np.random.randn(2, 3), columns=['b', 'd', 'a'])
In [66]:
df1
Out[66]:
  a  b  c  d
0 -0.204708 0.478943 -0.519439 -0.555730
1 1.965781 1.393406 0.092908 0.281746
2 0.769023 1.246435 1.007189 -1.296221
In [67]:
df2
Out[67]:
  b  d  a
0 0.274992 0.228913 1.352917
1 0.886429 -2.001637 -0.371843
In [68]:
"ignore_index=: do not preserve the indexes on the concatenation axis; produce a new range(total_length) index instead"
pd.concat([df1, df2], ignore_index=True)
Out[68]:
  a  b  c  d
0 -0.204708 0.478943 -0.519439 -0.555730
1 1.965781 1.393406 0.092908 0.281746
2 0.769023 1.246435 1.007189 -1.296221
3 1.352917 0.274992 NaN 0.228913
4 -0.371843 0.886429 NaN -2.001637
In [7]:
"For comparison, with the original index values kept:"
pd.concat([df1, df2])
Out[7]:
  a  b  c  d
0 -0.204708 0.478943 -0.519439 -0.555730
1 1.965781 1.393406 0.092908 0.281746
2 0.769023 1.246435 1.007189 -1.296221
0 1.352917 0.274992 NaN 0.228913
1 -0.371843 0.886429 NaN -2.001637
 

Table 7-2: concat function arguments

objs              List or dict of pandas objects to be concatenated. The only required argument
axis              Axis to concatenate along; defaults to 0
join              Either 'inner' or 'outer' ('outer' by default); whether to intersect (inner) or union (outer) the indexes along the other axes
join_axes         Specific indexes to use for the other n-1 axes instead of performing union/intersection logic
keys              Values to associate with the objects being concatenated, forming a hierarchical index along the concatenation axis. Can be a list or array of arbitrary values, an array of tuples, or a list of arrays (if multi-level arrays are passed in levels)
levels            Specific indexes to use as hierarchical index level(s) if keys is passed
names             Names for the created hierarchical levels if keys and/or levels are passed
verify_integrity  Check the new axis in the result for duplicates and raise an exception if found. By default (False) duplicates are allowed
ignore_index      Do not preserve the indexes along the concatenation axis, instead producing a new range(total_length) index
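A small sketch of two parameters from the table that the cells above did not exercise, keys= and verify_integrity= (the Series contents are made up):

```python
import pandas as pd

s1 = pd.Series([0, 1], index=['a', 'b'])
s2 = pd.Series([2, 3], index=['a', 'b'])

# keys= builds a hierarchical index on the concatenation axis, so the
# duplicated 'a'/'b' labels stay distinguishable.
stacked = pd.concat([s1, s2], keys=['one', 'two'])

# verify_integrity=True raises instead of silently keeping duplicates.
try:
    pd.concat([s1, s2], verify_integrity=True)
    duplicates_caught = False
except ValueError:
    duplicates_caught = True
```

Selecting `stacked['two']` recovers s2 through the outer level of the hierarchical index.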
 

Combining Data with Overlap

In [69]:
a = Series([np.nan, 2.5, np.nan, 3.5, 4.5, np.nan],
           index=['f', 'e', 'd', 'c', 'b', 'a'])
b = Series(np.arange(len(a), dtype=np.float64),
           index=['f', 'e', 'd', 'c', 'b', 'a'])
b[-1] = np.nan
In [70]:
a
Out[70]:
f    NaN
e    2.5
d    NaN
c    3.5
b    4.5
a    NaN
dtype: float64
In [71]:
b
Out[71]:
f     0
e     1
d     2
c     3
b     4
a   NaN
dtype: float64
In [72]:
"NumPy's where function expresses a vectorized if-else:"
np.where(pd.isnull(a), b, a)
Out[72]:
array([ 0. ,  2.5,  2. ,  3.5,  4.5,  nan])
In [73]:
"Series has a combine_first method that does the same thing and also aligns the data"
b[:-2].combine_first(a[2:])
Out[73]:
a    NaN
b    4.5
c    3.0
d    2.0
e    1.0
f    0.0
dtype: float64
In [74]:
"combine_first works the same way on DataFrames; think of it as 'patching' missing data in the calling object with data from the object you pass in"
df1 = DataFrame({'a': [1., np.nan, 5., np.nan],
                 'b': [np.nan, 2., np.nan, 6.],
                 'c': range(2, 18, 4)})
df2 = DataFrame({'a': [5., 4., np.nan, 3., 7.],
                 'b': [np.nan, 3., 4., 6., 8.]})
df1.combine_first(df2)
Out[74]:
  a  b  c
0 1 NaN 2
1 4 2 6
2 5 4 10
3 3 6 14
4 7 8 NaN
 

Reshaping (reshape) and Pivoting (pivot)

 

Reshaping with Hierarchical Indexing

stack: "rotates" the data's columns into rows
unstack: "rotates" the data's rows into columns

In [75]:
data = DataFrame(np.arange(6).reshape((2, 3)),
                 index=pd.Index(['Ohio', 'Colorado'], name='state'),
                 columns=pd.Index(['one', 'two', 'three'], name='number'))
data
Out[75]:
number    one  two  three
state
Ohio 0 1 2
Colorado 3 4 5
In [76]:
"stack example: operates on the innermost level by default"
result = data.stack()
result
Out[76]:
state     number
Ohio      one       0
          two       1
          three     2
Colorado  one       3
          two       4
          three     5
dtype: int32
In [77]:
"unstack example: operates on the innermost level by default"
result.unstack()
Out[77]:
number    one  two  three
state
Ohio 0 1 2
Colorado 3 4 5
In [78]:
"Pass a level number or name to unstack (or stack) a different level:"
result.unstack(0)
Out[78]:
state   Ohio  Colorado
number
one 0 3
two 1 4
three 2 5
In [79]:
result.unstack('state')
Out[79]:
state   Ohio  Colorado
number
one 0 3
two 1 4
three 2 5
In [80]:
s1 = Series([0, 1, 2, 3], index=['a', 'b', 'c', 'd'])
s2 = Series([4, 5, 6], index=['c', 'd', 'e'])
data2 = pd.concat([s1, s2], keys=['one', 'two'])
data2
Out[80]:
one  a    0
     b    1
     c    2
     d    3
two  c    4
     d    5
     e    6
dtype: int64
In [82]:
"unstack may introduce missing data"
data2.unstack()
Out[82]:
  a  b  c  d  e
one 0 1 2 3 NaN
two NaN NaN 4 5 6
In [83]:
"stack filters out missing data by default, so stacking and unstacking are invertible"
data2.unstack().stack()
Out[83]:
one  a    0
     b    1
     c    2
     d    3
two  c    4
     d    5
     e    6
dtype: float64
In [84]:
"Missing data can also be kept: pass dropna=False"
data2.unstack().stack(dropna=False)
Out[84]:
one  a     0
     b     1
     c     2
     d     3
     e   NaN
two  a   NaN
     b   NaN
     c     4
     d     5
     e     6
dtype: float64
In [85]:
"When unstacking a DataFrame, the level being unstacked becomes the lowest level in the result"
df = DataFrame({'left': result, 'right': result + 5},
               columns=pd.Index(['left', 'right'], name='side'))
df
Out[85]:
side             left  right
state    number
Ohio     one        0      5
         two        1      6
         three      2      7
Colorado one        3      8
         two        4      9
         three      5     10
In [86]:
df.unstack('state')
Out[86]:
side    left            right
state   Ohio  Colorado  Ohio  Colorado
number
one        0         3     5         8
two        1         4     6         9
three      2         5     7        10
In [87]:
df.unstack('state').stack('side')
Out[87]:
state          Ohio  Colorado
number  side
one     left      0         3
        right     5         8
two     left      1         4
        right     6         9
three   left      2         5
        right     7        10
 

Pivoting "Long" Format to "Wide" Format

Time series data is often stored in databases and CSVs in so-called "long" or "stacked" format:

In [88]:
data = pd.read_csv('ch07/macrodata.csv')
periods = pd.PeriodIndex(year=data.year, quarter=data.quarter, name='date')
data = DataFrame(data.to_records(),
                 columns=pd.Index(['realgdp', 'infl', 'unemp'], name='item'),
                 index=periods.to_timestamp('D', 'end'))

ldata = data.stack().reset_index().rename(columns={0: 'value'})
wdata = ldata.pivot('date', 'item', 'value')
In [89]:
ldata[:10]
Out[89]:
  date  item  value
0 1959-03-31 realgdp 2710.349
1 1959-03-31 infl 0.000
2 1959-03-31 unemp 5.800
3 1959-06-30 realgdp 2778.801
4 1959-06-30 infl 2.340
5 1959-06-30 unemp 5.100
6 1959-09-30 realgdp 2775.488
7 1959-09-30 infl 2.740
8 1959-09-30 unemp 5.300
9 1959-12-31 realgdp 2785.204
In [90]:
"pivot: the names of the first two argument columns become the row/column index names and their values become the index labels; the third argument's values fill the body of the DataFrame"
pivoted = ldata.pivot('date', 'item', 'value')
pivoted.head()
Out[90]:
item        infl   realgdp  unemp
date
1959-03-31 0.00 2710.349 5.8
1959-06-30 2.34 2778.801 5.1
1959-09-30 2.74 2775.488 5.3
1959-12-31 0.27 2785.204 5.6
1960-03-31 2.31 2847.699 5.2
In [91]:
ldata['value2'] = np.random.randn(len(ldata))
ldata[:10]
Out[91]:
  date  item  value  value2
0 1959-03-31 realgdp 2710.349 1.669025
1 1959-03-31 infl 0.000 -0.438570
2 1959-03-31 unemp 5.800 -0.539741
3 1959-06-30 realgdp 2778.801 0.476985
4 1959-06-30 infl 2.340 3.248944
5 1959-06-30 unemp 5.100 -1.021228
6 1959-09-30 realgdp 2775.488 -0.577087
7 1959-09-30 infl 2.740 0.124121
8 1959-09-30 unemp 5.300 0.302614
9 1959-12-31 realgdp 2785.204 0.523772
In [92]:
"pivot: omit the last argument to get a DataFrame with hierarchical columns"
pivoted = ldata.pivot('date', 'item')
pivoted[:5]
Out[92]:
      value                  value2
item  infl  realgdp  unemp   infl  realgdp  unemp
date
1959-03-31 0.00 2710.349 5.8 -0.438570 1.669025 -0.539741
1959-06-30 2.34 2778.801 5.1 3.248944 0.476985 -1.021228
1959-09-30 2.74 2775.488 5.3 0.124121 -0.577087 0.302614
1959-12-31 0.27 2785.204 5.6 0.000940 0.523772 1.343810
1960-03-31 2.31 2847.699 5.2 -0.831154 -0.713544 -2.370232
In [93]:
pivoted['value'][:5]
Out[93]:
item        infl   realgdp  unemp
date
1959-03-31 0.00 2710.349 5.8
1959-06-30 2.34 2778.801 5.1
1959-09-30 2.74 2775.488 5.3
1959-12-31 0.27 2785.204 5.6
1960-03-31 2.31 2847.699 5.2
In [94]:
"Note: pivot is really just a shortcut: create a hierarchical index with set_index, then reshape with unstack"
unstacked = ldata.set_index(['date', 'item']).unstack('item')
unstacked[:7]
Out[94]:
      value                  value2
item  infl  realgdp  unemp   infl  realgdp  unemp
date
1959-03-31 0.00 2710.349 5.8 -0.438570 1.669025 -0.539741
1959-06-30 2.34 2778.801 5.1 3.248944 0.476985 -1.021228
1959-09-30 2.74 2775.488 5.3 0.124121 -0.577087 0.302614
1959-12-31 0.27 2785.204 5.6 0.000940 0.523772 1.343810
1960-03-31 2.31 2847.699 5.2 -0.831154 -0.713544 -2.370232
1960-06-30 0.14 2834.390 5.2 -0.860757 -1.860761 0.560145
1960-09-30 2.70 2839.022 5.6 0.119827 -1.265934 -1.063512
 

Data Transformation

 

Removing Duplicates

In [95]:
data = DataFrame({'k1': ['one'] * 3 + ['two'] * 4,
                  'k2': [1, 1, 2, 3, 3, 4, 4]})
data
Out[95]:
  k1  k2
0 one 1
1 one 1
2 one 2
3 two 3
4 two 3
5 two 4
6 two 4
In [96]:
"duplicated: returns a boolean Series indicating whether each row is a duplicate"
data.duplicated()
Out[96]:
0    False
1     True
2    False
3    False
4     True
5    False
6     True
dtype: bool
In [97]:
"drop_duplicates: returns the data with the duplicate rows removed"
data.drop_duplicates()
Out[97]:
  k1  k2
0 one 1
2 one 2
3 two 3
5 two 4
In [102]:
"Both methods consider all of the columns by default; to check a subset, pass a list of the columns"
data['v1'] = range(7)
data.drop_duplicates(['k1'])
Out[102]:
  k1  k2  v1
0 one 1 0
3 two 3 3
In [105]:
"keep='last': duplicated and drop_duplicates keep the first observed value combination by default; passing keep='last' keeps the last one"
data.drop_duplicates(['k1', 'k2'], keep='last')
Out[105]:
  k1  k2  v1
1 one 1 1
2 one 2 2
4 two 3 4
6 two 4 6
 

Transforming Data Using a Function or Mapping

In [108]:
data = DataFrame({'food': ['bacon', 'pulled pork', 'bacon', 'Pastrami',
                           'corned beef', 'Bacon', 'pastrami', 'honey ham',
                           'nova lox'],
                  'ounces': [4, 3, 12, 6, 7.5, 8, 3, 5, 6]})
data
Out[108]:
  food  ounces
0 bacon 4.0
1 pulled pork 3.0
2 bacon 12.0
3 Pastrami 6.0
4 corned beef 7.5
5 Bacon 8.0
6 pastrami 3.0
7 honey ham 5.0
8 nova lox 6.0
In [109]:
"A dict mapping meat types to animals"
meat_to_animal = {
  'bacon': 'pig',
  'pulled pork': 'pig',
  'pastrami': 'cow',
  'corned beef': 'cow',
  'honey ham': 'pig',
  'nova lox': 'salmon'
}
In [110]:
"Series.map accepts a function or a dict-like object containing a mapping, but here we first have to convert the values to lowercase:"
data['animal'] = data['food'].map(str.lower).map(meat_to_animal)
data
Out[110]:
  food  ounces  animal
0 bacon 4.0 pig
1 pulled pork 3.0 pig
2 bacon 12.0 pig
3 Pastrami 6.0 cow
4 corned beef 7.5 cow
5 Bacon 8.0 pig
6 pastrami 3.0 cow
7 honey ham 5.0 pig
8 nova lox 6.0 salmon
In [111]:
"A function that does all of the work can also be passed"
data['food'].map(lambda x: meat_to_animal[x.lower()])
Out[111]:
0       pig
1       pig
2       pig
3       cow
4       cow
5       pig
6       cow
7       pig
8    salmon
Name: food, dtype: object
 

Replacing Values

Filling in missing data with fillna can be seen as a special case of value replacement.
map can also be used to modify a subset of an object's values,
but replace provides a simpler, more flexible way to do so.
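The "special case" claim above can be checked directly; a minimal sketch with a made-up Series:

```python
import numpy as np
import pandas as pd

s = pd.Series([1., np.nan, -999.])

# fillna(0) and replace(np.nan, 0) give the same result here, which is
# why fillna can be viewed as a special case of replace.
filled = s.fillna(0)
replaced = s.replace(np.nan, 0)
```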

In [112]:
data = Series([1., -999., 2., -999., -1000., 3.])
data
Out[112]:
0       1
1    -999
2       2
3    -999
4   -1000
5       3
dtype: float64
In [113]:
"-999 may be a sentinel for missing data; replace it with NA, which pandas understands"
data.replace(-999, np.nan)
Out[113]:
0       1
1     NaN
2       2
3     NaN
4   -1000
5       3
dtype: float64
In [114]:
"Pass a list to replace multiple values at once"
data.replace([-999, -1000], np.nan)
Out[114]:
0     1
1   NaN
2     2
3   NaN
4   NaN
5     3
dtype: float64
In [115]:
"Use a different replacement for each value"
data.replace([-999, -1000], [np.nan, 0])
Out[115]:
0     1
1   NaN
2     2
3   NaN
4     0
5     3
dtype: float64
In [116]:
"A dict argument also works"
data.replace({-999: np.nan, -1000: 0})
Out[116]:
0     1
1   NaN
2     2
3   NaN
4     0
5     3
dtype: float64
 

Renaming Axis Indexes

In [8]:
data = DataFrame(np.arange(12).reshape((3, 4)),
                 index=['Ohio', 'Colorado', 'New York'],
                 columns=['one', 'two', 'three', 'four'])
data
Out[8]:
  one  two  three  four
Ohio 0 1 2 3
Colorado 4 5 6 7
New York 8 9 10 11
In [9]:
"Like a Series, the axis labels have a map method:"
data.index.map(str.upper)
Out[9]:
array(['OHIO', 'COLORADO', 'NEW YORK'], dtype=object)
In [10]:
"To modify the DataFrame, assign the result to its index:"
data.index = data.index.map(str.upper)
data
Out[10]:
  one  two  three  four
OHIO 0 1 2 3
COLORADO 4 5 6 7
NEW YORK 8 9 10 11
In [11]:
"rename: copies the DataFrame first, then assigns the new index and column labels"
data.rename(index=str.title, columns=str.upper)
Out[11]:
  ONE  TWO  THREE  FOUR
Ohio 0 1 2 3
Colorado 4 5 6 7
New York 8 9 10 11
In [12]:
"Pass a dict to update only a subset of the axis labels"
data.rename(index={'OHIO': 'INDIANA'},
            columns={'three': 'peekaboo'})
Out[12]:
  one  two  peekaboo  four
INDIANA 0 1 2 3
COLORADO 4 5 6 7
NEW YORK 8 9 10 11
In [13]:
"Pass inplace=True to modify a dataset in place"
# rename always returns a reference to the DataFrame
_ = data.rename(index={'OHIO': 'INDIANA'}, inplace=True)
data
Out[13]:
  one  two  three  four
INDIANA 0 1 2 3
COLORADO 4 5 6 7
NEW YORK 8 9 10 11
In [14]:
_
Out[14]:
  one  two  three  four
INDIANA 0 1 2 3
COLORADO 4 5 6 7
NEW YORK 8 9 10 11
 

Discretization and Binning

In [122]:
ages = [20, 22, 25, 27, 21, 23, 37, 31, 61, 45, 41, 32]
In [130]:
"cut: partition the data into discrete 'bins'"
bins = [18, 25, 35, 60, 100]
cats = pd.cut(ages, bins)
cats
Out[130]:
[(18, 25], (18, 25], (18, 25], (25, 35], (18, 25], ..., (25, 35], (60, 100], (35, 60], (35, 60], (25, 35]]
Length: 12
Categories (4, object): [(18, 25] < (25, 35] < (35, 60] < (60, 100]]
In [127]:
"codes: each value's position among the categories"
cats.codes  # the book uses cats.labels (renamed in newer pandas)
Out[127]:
array([0, 0, 0, 1, 0, 0, 2, 1, 3, 2, 2, 1], dtype=int8)
In [128]:
"categories: the category names"
cats.categories  # the book uses cats.levels (renamed in newer pandas)
Out[128]:
Index(['(18, 25]', '(25, 35]', '(35, 60]', '(60, 100]'], dtype='object')
In [129]:
pd.value_counts(cats)
Out[129]:
(18, 25]     5
(35, 60]     3
(25, 35]     3
(60, 100]    1
dtype: int64
In [131]:
"right=False: make the left side the closed end (the right side is closed by default)"
pd.cut(ages, [18, 26, 36, 61, 100], right=False)
Out[131]:
[[18, 26), [18, 26), [18, 26), [26, 36), [18, 26), ..., [26, 36), [61, 100), [36, 61), [36, 61), [26, 36)]
Length: 12
Categories (4, object): [[18, 26) < [26, 36) < [36, 61) < [61, 100)]
In [132]:
"Set your own bin (category) names"
group_names = ['Youth', 'YoungAdult', 'MiddleAged', 'Senior']
pd.cut(ages, bins, labels=group_names)
Out[132]:
[Youth, Youth, Youth, YoungAdult, Youth, ..., YoungAdult, Senior, MiddleAged, MiddleAged, YoungAdult]
Length: 12
Categories (4, object): [Youth < YoungAdult < MiddleAged < Senior]
In [133]:
"Equal-width bins: pass the number of bins, and equal-length bins are computed from the data's min and max:"
data = np.random.rand(20)
pd.cut(data, 4, precision=2)
Out[133]:
[(0.45, 0.67], (0.23, 0.45], (0.0037, 0.23], (0.45, 0.67], (0.67, 0.9], ..., (0.67, 0.9], (0.0037, 0.23], (0.0037, 0.23], (0.23, 0.45], (0.23, 0.45]]
Length: 20
Categories (4, object): [(0.0037, 0.23] < (0.23, 0.45] < (0.45, 0.67] < (0.67, 0.9]]
In [134]:
"qcut: like cut, but bins the data by sample quantiles (yielding bins with roughly equal counts)"
data = np.random.randn(1000)  # normally distributed
cats = pd.qcut(data, 4)  # cut into quartiles
cats
Out[134]:
[(-0.022, 0.641], [-3.745, -0.635], (0.641, 3.26], [-3.745, -0.635], (-0.022, 0.641], ..., (-0.022, 0.641], (0.641, 3.26], (-0.635, -0.022], (0.641, 3.26], (-0.635, -0.022]]
Length: 1000
Categories (4, object): [[-3.745, -0.635] < (-0.635, -0.022] < (-0.022, 0.641] < (0.641, 3.26]]
In [135]:
pd.value_counts(cats)
Out[135]:
(0.641, 3.26]       250
(-0.022, 0.641]     250
(-0.635, -0.022]    250
[-3.745, -0.635]    250
dtype: int64
In [136]:
"qcut also accepts custom quantiles:"
pd.qcut(data, [0, 0.1, 0.5, 0.9, 1.])
Out[136]:
[(-0.022, 1.302], (-1.266, -0.022], (-0.022, 1.302], [-3.745, -1.266], (-0.022, 1.302], ..., (-0.022, 1.302], (-0.022, 1.302], (-1.266, -0.022], (-0.022, 1.302], (-1.266, -0.022]]
Length: 1000
Categories (4, object): [[-3.745, -1.266] < (-1.266, -0.022] < (-0.022, 1.302] < (1.302, 3.26]]
 

Detecting and Filtering Outliers

Filtering or transforming outliers is largely a matter of array operations

In [15]:
np.random.seed(12345)
data = DataFrame(np.random.randn(1000, 4))
data.describe()
Out[15]:
  0  1  2  3
count 1000.000000 1000.000000 1000.000000 1000.000000
mean -0.067684 0.067924 0.025598 -0.002298
std 0.998035 0.992106 1.006835 0.996794
min -3.428254 -3.548824 -3.184377 -3.745356
25% -0.774890 -0.591841 -0.641675 -0.644144
50% -0.116401 0.101143 0.002073 -0.013611
75% 0.616366 0.780282 0.680391 0.654328
max 3.366626 2.653656 3.260383 3.927528
In [16]:
"Find the values in a column exceeding 3 in absolute value"
col = data[3]
col[np.abs(col) > 3]
Out[16]:
97     3.927528
305   -3.399312
400   -3.745356
Name: 3, dtype: float64
In [23]:
(np.abs(data) > 3).any()  # does each column contain any value with absolute value over 3? (matches the output below)
Out[23]:
0    True
1    True
2    True
3    True
dtype: bool
In [22]:
"To select all rows having a value exceeding 3 or -3, use a boolean DataFrame with the any method:"
(np.abs(data) > 3).any(1).head(10) # any(axis=1)
Out[22]:
0    False
1    False
2    False
3    False
4    False
5     True
6    False
7    False
8    False
9    False
dtype: bool
In [140]:
data[(np.abs(data) > 3).any(1)] 
Out[140]:
  0  1  2  3
5 -0.539741 0.476985 3.248944 -1.021228
97 -0.774363 0.552936 0.106061 3.927528
102 -0.655054 -0.565230 3.176873 0.959533
305 -2.315555 0.457246 -0.025907 -3.399312
324 0.050188 1.951312 3.260383 0.963301
400 0.146326 0.508391 -0.196713 -3.745356
499 -0.293333 -0.242459 -3.056990 1.918403
523 -3.428254 -0.296336 -0.439938 -0.867165
586 0.275144 1.179227 -3.184377 1.369891
808 -0.362528 -3.548824 1.553205 -2.186301
900 3.366626 -2.372214 0.851010 1.332846
In [26]:
"Assign to the values meeting the criterion"
data[np.abs(data) > 3] = np.sign(data) * 3 # the `sign` function returns ``-1 if x < 0, 0 if x==0, 1 if x > 0``
data.describe() # note how the min and max values are now capped
Out[26]:
  0  1  2  3
count 1000.000000 1000.000000 1000.000000 1000.000000
mean -0.067623 0.068473 0.025153 -0.002081
std 0.995485 0.990253 1.003977 0.989736
min -3.000000 -3.000000 -3.000000 -3.000000
25% -0.774890 -0.591841 -0.641675 -0.644144
50% -0.116401 0.101143 0.002073 -0.013611
75% 0.616366 0.780282 0.680391 0.654328
max 3.000000 2.653656 3.000000 3.000000
 

Permutation and Random Sampling

np.random.permutation makes it easy to permute (randomly reorder) a Series or the rows of a DataFrame

In [144]:
df = DataFrame(np.arange(5 * 4).reshape((5, 4)))
sampler = np.random.permutation(5)  # a random ordering of the integers 0..4
sampler
Out[144]:
array([1, 3, 4, 0, 2])
In [145]:
df
Out[145]:
  0  1  2  3
0 0 1 2 3
1 4 5 6 7
2 8 9 10 11
3 12 13 14 15
4 16 17 18 19
In [146]:
"df.take(indices): like np.take(df, indices), selects elements along an axis via an integer array, similar to reindex"
df.take(sampler)
Out[146]:
  0  1  2  3
1 4 5 6 7
3 12 13 14 15
4 16 17 18 19
0 0 1 2 3
2 8 9 10 11
In [148]:
df
Out[148]:
  0  1  2  3
0 0 1 2 3
1 4 5 6 7
2 8 9 10 11
3 12 13 14 15
4 16 17 18 19
In [147]:
"To select a random subset without replacement, slice off the first K elements of the array returned by permutation (K = desired subset size)"
df.take(np.random.permutation(len(df))[:3])
Out[147]:
  0  1  2  3
1 4 5 6 7
3 12 13 14 15
0 0 1 2 3
In [149]:
"To generate samples with replacement, the fastest way is to draw random integers with np.random.randint"
"My understanding: this approach can produce repeated indexes, whereas the previous one cannot"
bag = np.array([5, 7, -1, 6, 4])
sampler = np.random.randint(0, len(bag), size=10)
In [150]:
sampler
Out[150]:
array([2, 2, 0, 3, 0, 4, 1, 1, 2, 3])
In [151]:
draws = bag.take(sampler)
draws
Out[151]:
array([-1, -1,  5,  6,  5,  4,  7,  7, -1,  6])
In [152]:
bag
Out[152]:
array([ 5,  7, -1,  6,  4])
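As a side note, pandas versions newer than the book also provide a built-in `sample` method that covers both cases above in one call (a sketch; `sample` is not mentioned in the original text):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(np.arange(5 * 4).reshape((5, 4)))

# Without replacement: like taking the first k rows of a permutation.
subset = df.sample(n=3)

# With replacement: like the randint-based draw above.
draws = df.sample(n=10, replace=True)
```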
 

Computing Indicator/Dummy Variables

Another type of transformation common in statistical modeling and machine learning is converting a
categorical variable into a "dummy matrix" or "indicator matrix".
If a column contains K distinct values, you can derive a K-column matrix or DataFrame whose values are all 1s and 0s.
pandas has a get_dummies function for this (though rolling your own is not hard either).
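"Rolling your own" can be sketched in a few lines (the Series contents are made up): one 0/1 column per distinct value, built by comparing the Series against each value.

```python
import pandas as pd

s = pd.Series(['b', 'b', 'a', 'c'])

# One indicator column per distinct value, mirroring pd.get_dummies(s).
cats = sorted(s.unique())
dummies = pd.DataFrame({c: (s == c).astype(int) for c in cats})
print(dummies)
```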

In [153]:
df = DataFrame({'key': ['b', 'b', 'a', 'c', 'a', 'b'],
                'data1': range(6)})
pd.get_dummies(df['key'])
Out[153]:
  a  b  c
0 0 1 0
1 0 1 0
2 1 0 0
3 0 0 1
4 1 0 0
5 0 1 0
In [157]:
"prefix=: sometimes you may want to add a prefix to the indicator DataFrame's columns so the result can be merged with other data"
dummies = pd.get_dummies(df['key'], prefix='key')
dummies
Out[157]:
  key_a  key_b  key_c
0 0 1 0
1 0 1 0
2 1 0 0
3 0 0 1
4 1 0 0
5 0 1 0
In [154]:
df_with_dummy = df[['data1']].join(dummies)
df_with_dummy
Out[154]:
  data1  key_a  key_b  key_c
0 0 0 1 0
1 1 0 1 0
2 2 1 0 0
3 3 0 0 1
4 4 1 0 0
5 5 0 1 0
In [29]:
"Things get a bit more complicated if a row can belong to multiple categories."
"engine='python': the C engine does not support regex separators such as sep='::', so pandas warns by default; this parameter avoids the warning"
mnames = ['movie_id', 'title', 'genres']
movies = pd.read_table('ch02/movielens/movies.dat', sep='::', header=None,names=mnames,engine='python')
movies[:10]
Out[29]:
  movie_id  title  genres
0 1 Toy Story (1995) Animation|Children's|Comedy
1 2 Jumanji (1995) Adventure|Children's|Fantasy
2 3 Grumpier Old Men (1995) Comedy|Romance
3 4 Waiting to Exhale (1995) Comedy|Drama
4 5 Father of the Bride Part II (1995) Comedy
5 6 Heat (1995) Action|Crime|Thriller
6 7 Sabrina (1995) Comedy|Romance
7 8 Tom and Huck (1995) Adventure|Children's
8 9 Sudden Death (1995) Action
9 10 GoldenEye (1995) Action|Adventure|Thriller
In [30]:
"Adding an indicator variable for each genre requires a bit of wrangling."
"First, extract the distinct genre values from the dataset"
genre_iter = (set(x.split('|')) for x in movies.genres)
genres = sorted(set.union(*genre_iter))
In [31]:
"Build the indicator DataFrame, starting from a DataFrame of all zeros"
dummies = DataFrame(np.zeros((len(movies), len(genres))), columns=genres)
In [32]:
"Iterate over the rows, setting the matching entries of each row to 1"
for i, gen in enumerate(movies.genres):
    dummies.ix[i, gen.split('|')] = 1
In [36]:
dummies.add_prefix('Genre_').ix[:10,:6]
Out[36]:
  Genre_Action  Genre_Adventure  Genre_Animation  Genre_Children's  Genre_Comedy  Genre_Crime
0 0 0 1 1 1 0
1 0 1 0 1 0 0
2 0 0 0 0 1 0
3 0 0 0 0 1 0
4 0 0 0 0 1 0
5 1 0 0 0 0 1
6 0 0 0 0 1 0
7 0 1 0 1 0 0
8 1 0 0 0 0 0
9 1 1 0 0 0 0
10 0 0 0 0 1 0
In [37]:
"Then join with movies:"
movies_windic = movies.join(dummies.add_prefix('Genre_'))
movies_windic.ix[:10,:6]
Out[37]:
  movie_id  title  genres  Genre_Action  Genre_Adventure  Genre_Animation
0 1 Toy Story (1995) Animation|Children's|Comedy 0 0 1
1 2 Jumanji (1995) Adventure|Children's|Fantasy 0 1 0
2 3 Grumpier Old Men (1995) Comedy|Romance 0 0 0
3 4 Waiting to Exhale (1995) Comedy|Drama 0 0 0
4 5 Father of the Bride Part II (1995) Comedy 0 0 0
5 6 Heat (1995) Action|Crime|Thriller 1 0 0
6 7 Sabrina (1995) Comedy|Romance 0 0 0
7 8 Tom and Huck (1995) Adventure|Children's 0 1 0
8 9 Sudden Death (1995) Action 1 0 0
9 10 GoldenEye (1995) Action|Adventure|Thriller 1 1 0
10 11 American President, The (1995) Comedy|Drama|Romance 0 0 0
In [38]:
movies_windic.ix[0]
# Note: for much bigger data, building multi-membership indicator variables this way gets quite slow; a lower-level function exploiting the DataFrame's internals would be needed.
Out[38]:
movie_id                                       1
title                           Toy Story (1995)
genres               Animation|Children's|Comedy
Genre_Action                                   0
Genre_Adventure                                0
Genre_Animation                                1
Genre_Children's                               1
Genre_Comedy                                   1
Genre_Crime                                    0
Genre_Documentary                              0
Genre_Drama                                    0
Genre_Fantasy                                  0
Genre_Film-Noir                                0
Genre_Horror                                   0
Genre_Musical                                  0
Genre_Mystery                                  0
Genre_Romance                                  0
Genre_Sci-Fi                                   0
Genre_Thriller                                 0
Genre_War                                      0
Genre_Western                                  0
Name: 0, dtype: object
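On the speed note above: pandas' own `Series.str.get_dummies` handles the multi-membership case in a single vectorized call. A sketch, using a small made-up stand-in for the MovieLens genres column:

```python
import pandas as pd

genres = pd.Series(["Animation|Children's|Comedy",
                    "Adventure|Children's|Fantasy",
                    'Comedy|Romance'])

# Splits each value on the separator and builds the whole indicator
# matrix at once, replacing the per-row Python loop.
dummies = genres.str.get_dummies(sep='|').add_prefix('Genre_')
print(dummies.columns.tolist())
```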
In [167]:
"A useful recipe for statistical applications: combine get_dummies with a discretization function such as cut"
np.random.seed(12345)
values = np.random.rand(10)
values
Out[167]:
array([ 0.9296,  0.3164,  0.1839,  0.2046,  0.5677,  0.5955,  0.9645,
        0.6532,  0.7489,  0.6536])
In [168]:
bins = [0, 0.2, 0.4, 0.6, 0.8, 1]
pd.get_dummies(pd.cut(values, bins))
Out[168]:
  (0, 0.2]  (0.2, 0.4]  (0.4, 0.6]  (0.6, 0.8]  (0.8, 1]
0 0 0 0 0 1
1 0 1 0 0 0
2 1 0 0 0 0
3 0 1 0 0 0
4 0 0 1 0 0
5 0 0 1 0 0
6 0 0 0 0 1
7 0 0 0 1 0
8 0 0 0 1 0
9 0 0 0 1 0
 

String Manipulation

 

String Object Methods (may be skimmed; covered in basic Python)

In [169]:
val = 'a,b,  guido'
val.split(',')
Out[169]:
['a', 'b', '  guido']
In [170]:
"split is often combined with strip to trim whitespace (including newlines)"
pieces = [x.strip() for x in val.split(',')]
pieces
Out[170]:
['a', 'b', 'guido']
In [171]:
first, second, third = pieces
first + '::' + second + '::' + third
Out[171]:
'a::b::guido'
In [172]:
"A faster method than the above"
'::'.join(pieces)
Out[172]:
'a::b::guido'
In [ ]:
'guido' in val
In [ ]:
val.index(',')
In [ ]:
val.find(':')
In [ ]:
val.index(':')
In [ ]:
val.count(',')
In [ ]:
val.replace(',', '::')
In [ ]:
val.replace(',', '')
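The untested cells above can be summarized with their results (note that `val.index(':')` would raise ValueError, since ':' is absent):

```python
val = 'a,b,  guido'

# Membership and search: index raises on a miss, find returns -1.
assert 'guido' in val
assert val.index(',') == 1
assert val.find(':') == -1
assert val.count(',') == 2
assert val.replace(',', '::') == 'a::b::  guido'
assert val.replace(',', '') == 'ab  guido'
```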
 

Regular Expressions (not yet studied; skipping for now)

In [ ]:
import re
text = "foo    bar\t baz  \tqux"
re.split('\s+', text)
In [ ]:
regex = re.compile('\s+')
regex.split(text)
In [ ]:
regex.findall(text)
In [ ]:
text = """Dave dave@google.com
Steve steve@gmail.com
Rob rob@gmail.com
Ryan ryan@yahoo.com
"""
pattern = r'[A-Z0-9._%+-]+@[A-Z0-9.-]+\.[A-Z]{2,4}'

# re.IGNORECASE makes the regex case-insensitive
regex = re.compile(pattern, flags=re.IGNORECASE)
In [ ]:
regex.findall(text)
In [ ]:
m = regex.search(text)
m
In [ ]:
text[m.start():m.end()]
In [ ]:
print(regex.match(text))
In [ ]:
print(regex.sub('REDACTED', text))
In [ ]:
pattern = r'([A-Z0-9._%+-]+)@([A-Z0-9.-]+)\.([A-Z]{2,4})'
regex = re.compile(pattern, flags=re.IGNORECASE)
In [ ]:
m = regex.match('wesm@bright.net')
m.groups()
In [ ]:
regex.findall(text)
In [ ]:
print(regex.sub(r'Username: \1, Domain: \2, Suffix: \3', text))
In [ ]:
regex = re.compile(r"""
    (?P<username>[A-Z0-9._%+-]+)
    @
    (?P<domain>[A-Z0-9.-]+)
    \.
    (?P<suffix>[A-Z]{2,4})""", flags=re.IGNORECASE|re.VERBOSE)
In [ ]:
m = regex.match('wesm@bright.net')
m.groupdict()
 

Vectorized String Functions in pandas (skipping for now; revisit after learning regular expressions)

In [ ]:
data = {'Dave': 'dave@google.com', 'Steve': 'steve@gmail.com',
        'Rob': 'rob@gmail.com', 'Wes': np.nan}
data = Series(data)
In [ ]:
data
In [ ]:
data.isnull()
In [ ]:
data.str.contains('gmail')
In [ ]:
pattern
In [ ]:
data.str.findall(pattern, flags=re.IGNORECASE)
In [ ]:
matches = data.str.match(pattern, flags=re.IGNORECASE)
matches
In [ ]:
matches.str.get(1)
In [ ]:
matches.str[0]
In [ ]:
data.str[:5]
 

Example: USDA Food Database (skipped for now)

{ "id": 21441, "description": "KENTUCKY FRIED CHICKEN, Fried Chicken, EXTRA CRISPY, Wing, meat and skin with breading", "tags": ["KFC"], "manufacturer": "Kentucky Fried Chicken", "group": "Fast Foods", "portions": [ { "amount": 1, "unit": "wing, with skin", "grams": 68.0 }, ... ], "nutrients": [ { "value": 20.8, "units": "g", "description": "Protein", "group": "Composition" }, ... ] }
In [ ]:
import json
db = json.load(open('ch07/foods-2011-10-03.json'))
len(db)
In [ ]:
db[0].keys()
In [ ]:
db[0]['nutrients'][0]
In [ ]:
nutrients = DataFrame(db[0]['nutrients'])
nutrients[:7]
In [ ]:
info_keys = ['description', 'group', 'id', 'manufacturer']
info = DataFrame(db, columns=info_keys)
In [ ]:
info[:5]
In [ ]:
info
In [ ]:
pd.value_counts(info.group)[:10]
In [ ]:
nutrients = []

for rec in db:
    fnuts = DataFrame(rec['nutrients'])
    fnuts['id'] = rec['id']
    nutrients.append(fnuts)

nutrients = pd.concat(nutrients, ignore_index=True)
In [ ]:
nutrients
In [ ]:
nutrients.duplicated().sum()
In [ ]:
nutrients = nutrients.drop_duplicates()
In [ ]:
col_mapping = {'description' : 'food',
               'group'       : 'fgroup'}
info = info.rename(columns=col_mapping, copy=False)
info
In [ ]:
col_mapping = {'description' : 'nutrient',
               'group' : 'nutgroup'}
nutrients = nutrients.rename(columns=col_mapping, copy=False)
nutrients
In [ ]:
ndata = pd.merge(nutrients, info, on='id', how='outer')
In [ ]:
ndata
In [ ]:
ndata.ix[30000]
In [ ]:
result = ndata.groupby(['nutrient', 'fgroup'])['value'].quantile(0.5)
result['Zinc, Zn'].order().plot(kind='barh')
In [ ]:
by_nutrient = ndata.groupby(['nutgroup', 'nutrient'])

get_maximum = lambda x: x.xs(x.value.idxmax())
get_minimum = lambda x: x.xs(x.value.idxmin())

max_foods = by_nutrient.apply(get_maximum)[['value', 'food']]

# make the food a little smaller
max_foods.food = max_foods.food.str[:50]
In [ ]:
max_foods.ix['Amino Acids']['food']
posted @ 2017-04-02 18:00  she35