
This article is an analysis based on the data of Chapter 12 of the hands-on part of 《Python数据分析与挖掘实战》 (Python Data Analysis and Mining in Action): 《电子商务网站用户行为分析及服务推荐》 (User Behavior Analysis and Service Recommendation for an E-commerce Website).

Because this chapter covers a lot of ground, it is shared in three parts: data exploration (part 1), data preprocessing (part 2), and model building (part 3).

*This article continues the work of the previous post and covers the "data preprocessing (part 2)" portion.*

#  1 Data Cleaning

##  1.1 Inspect what each deletion rule covers

**Deletion rule 1:** count the intermediate-type pages (URLs containing the `midques_` keyword).

```code
    # Imports used throughout this post
    import numpy as np
    import pandas as pd
    from pandas import DataFrame
    from sqlalchemy import create_engine
    
    # Read the data from the database in chunks
    engine = create_engine('mysql+pymysql://root:@127.0.0.1:3306/jing?charset=utf8')
    sql = pd.read_sql('all_gzdata', engine, chunksize = 10000)
    
    def countmidques(i): 
        j = i[['fullURL','fullURLId','realIP']].copy()
        j['type'] = u'非中间类型网页'   # default: not an intermediate-type page
        j['type'][j['fullURL'].str.contains('midques_')] = u'中间类型网页'   # intermediate-type page
        return j['type'].value_counts()
    counts1 = [countmidques(i) for i in sql]
    counts1 = pd.concat(counts1).groupby(level=0).sum()
    counts1
```

![](https://img-blog.csdn.net/2018021408162470?watermark/2/text/aHR0cDovL2Jsb2cuY3Nkbi5uZXQvdTAxMjA2Mzc3Mw==/font/5a6L5L2T/fontsize/400/fill/I0JBQkFCMA==/dissolve/70)
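One detail worth noting before going further: with `chunksize` set, `pd.read_sql` returns an iterator of DataFrames rather than a single DataFrame, and that iterator can be consumed only once. This is why every block below re-creates `engine` and `sql` before looping over the table again. A minimal sketch:

```code
    # Sketch: a chunked read is an iterator that can be consumed only once,
    # so it has to be re-created before every new pass over the table
    sql = pd.read_sql('all_gzdata', engine, chunksize = 10000)
    total = sum(len(chunk) for chunk in sql)   # this pass exhausts the iterator
    sql = pd.read_sql('all_gzdata', engine, chunksize = 10000)   # re-read before the next pass
```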

**Deletion rule 2:** drop user click records whose URL does not contain a `.html` page.

```code
    # Read the data from the database in chunks
    engine = create_engine('mysql+pymysql://root:@127.0.0.1:3306/jing?charset=utf8')
    sql = pd.read_sql('all_gzdata', engine, chunksize = 10000)
    
    def countnohtml(i):
        j = i[['fullURL','pageTitle','fullURLId']].copy()
        j['type'] = u'有html页面'   # default: the URL contains .html
        j['type'][j['fullURL'].str.contains('\.html')==False] = u'无.html点击行为的用户记录'   # click record without .html
        
        return j['type'].value_counts()
    counts2 = [countnohtml(i) for i in sql]
    counts2 = pd.concat(counts2).groupby(level=0).sum()
    counts2
```

![](https://img-blog.csdn.net/20180214081904173?watermark/2/text/aHR0cDovL2Jsb2cuY3Nkbi5uZXQvdTAxMjA2Mzc3Mw==/font/5a6L5L2T/fontsize/400/fill/I0JBQkFCMA==/dissolve/70)


**Deletion rule 3:** drop main-site pages that are lawyer assistant browsing pages (快车-律师助手), "consultation published successfully" pages (咨询发布成功), and the free legal-consultation publishing / 法律快搜 pages (快搜免费发布法律咨询).

```code
    # Note: none of the URLs of the records matched by this rule contain .html,
    # so the records filtered by rule 3 are already covered by rule 2
    engine = create_engine('mysql+pymysql://root:@127.0.0.1:3306/jing?charset=utf8')
    sql = pd.read_sql('all_gzdata', engine, chunksize = 10000)
    
    def countothers(i): 
        j = i[['fullURL','pageTitle','fullURLId']].copy()
        j['type'] = u'其他'   # default: other
        j['pageTitle'].fillna(u'空',inplace=True)   # fill missing titles so str.contains works
        j['type'][j['pageTitle'].str.contains(u'快车-律师助手')]= u'快车-律师助手'
        j['type'][j['pageTitle'].str.contains(u'咨询发布成功')]= u'咨询发布成功'
        j['type'][(j['pageTitle'].str.contains(u'免费发布法律咨询')) | (j['pageTitle'].str.contains(u'法律快搜'))] = u'快搜免费发布法律咨询'
        
        return j['type'].value_counts()
    counts3 = [countothers(i) for i in sql]
    counts3 = pd.concat(counts3).groupby(level=0).sum()
    counts3
```

![](https://img-blog.csdn.net/20180214082043376?watermark/2/text/aHR0cDovL2Jsb2cuY3Nkbi5uZXQvdTAxMjA2Mzc3Mw/font/5a6L5L2T/fontsize/400/fill/I0JBQkFCMA/dissolve/70)

**Deletion rule 4:** strip everything after the question mark in each URL, keeping only the part before it, and drop records whose main URL does not contain the site keyword.

```code
    engine = create_engine('mysql+pymysql://root:@127.0.0.1:3306/jing?charset=utf8')
    sql = pd.read_sql('all_gzdata', engine, chunksize = 10000)
    
    def deletquesafter(i):
        j = i[['fullURL']].copy()
        j['fullURL'] = j['fullURL'].str.replace('\?.*','')   # strip everything after the '?'
        j['type'] = u'主网址不包含关键字'   # default: URL does not contain the keyword
        j['type'][j['fullURL'].str.contains('lawtime')] = u'主网址包含关键字'   # URL contains 'lawtime'
        return j
    
    counts4 = [deletquesafter(i) for i in sql]
    counts4 = pd.concat(counts4)
    print len(counts4)
    counts4['type'].value_counts()
```

![](https://img-blog.csdn.net/20180214082136225?watermark/2/text/aHR0cDovL2Jsb2cuY3Nkbi5uZXQvdTAxMjA2Mzc3Mw==/font/5a6L5L2T/fontsize/400/fill/I0JBQkFCMA==/dissolve/70)

**Deletion rule 5:** remove duplicate records.

```code
    # Read the data from the database in chunks
    engine = create_engine('mysql+pymysql://root:@127.0.0.1:3306/jing?charset=utf8')
    sql = pd.read_sql('all_gzdata', engine, chunksize = 10000)
    
    def countduplicate(i): 
        j = i[['fullURL','timestamp_format','realIP']].copy()
        return j
    
    counts5 = [countduplicate(i) for i in sql]
    counts5 = pd.concat(counts5)
    
    print len(counts5[counts5.duplicated()==True]) # 35479 duplicated rows
    print len(counts5.drop_duplicates()) # 801971 rows after dropping duplicates
    a = counts5.drop_duplicates()
```

##  1.2 Cleaning the data through the database with Python

###  1.2.1 Step 1: apply deletion rules 1, 2, and 4

```code
    # URL filtering: keep only URLs containing .html, truncate at the '?',
    # keep only URLs on the main site (lawtime), and drop URLs containing midques_
    # Read the data from the database in chunks
    engine = create_engine('mysql+pymysql://root:@127.0.0.1:3306/jing?charset=utf8')
    sql = pd.read_sql('all_gzdata', engine, chunksize = 10000)
    
    for i in sql:
        d = i[['realIP', 'fullURL','pageTitle','userID','timestamp_format']].copy() # keep only the columns we need
        d['fullURL'] = d['fullURL'].str.replace('\?.*','') # strip everything after the '?'
        d = d[(d['fullURL'].str.contains('\.html')) & (d['fullURL'].str.contains('lawtime')) & (d['fullURL'].str.contains('midques_') == False)] # keep only .html URLs on lawtime without midques_
        # Save the cleaned chunk to the database
        d.to_sql('cleaned_one', engine, index = False, if_exists = 'append')
```

###  1.2.2 Step 2: apply deletion rule 3

```code
    # Page-title filtering: drop 快车-律师助手, 免费发布法律咨询, 咨询发布成功, and 法律快搜 pages
    # Read the data from the database (built on the result of step 1)
    engine = create_engine('mysql+pymysql://root:@127.0.0.1:3306/jing?charset=utf8')
    sql = pd.read_sql('cleaned_one', engine, chunksize = 10000)
    
    for i in sql:
        d = i[['realIP','fullURL','pageTitle','userID','timestamp_format']] # keep only the columns we need
        d['pageTitle'].fillna(u'空',inplace=True) # fill missing titles so str.contains works
        d = d[(d['pageTitle'].str.contains(u'快车-律师助手') == False) & (d['pageTitle'].str.contains(u'咨询发布成功') == False) & \
              (d['pageTitle'].str.contains(u'免费发布法律咨询') == False) & (d['pageTitle'].str.contains(u'法律快搜') == False)\
             ].copy()
        # Save the filtered chunk to the database
        d.to_sql('cleaned_two', engine, index = False, if_exists = 'append')
```

Note: it turned out that none of the records to be removed by page title have `.html` in their URLs, so step 2 is actually unnecessary; step 1 has already removed them.
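A quick way to verify that claim is to scan the raw table once and count the records whose page title matches rule 3 but whose URL still contains `.html`; if that count is zero, step 2 is indeed redundant. A minimal sketch, reusing the same engine and table as above:

```code
    # Sanity-check sketch: count rule-3 title matches whose URL still contains .html
    engine = create_engine('mysql+pymysql://root:@127.0.0.1:3306/jing?charset=utf8')
    sql = pd.read_sql('all_gzdata', engine, chunksize = 10000)
    
    overlap = 0
    for i in sql:
        d = i[['fullURL','pageTitle']].copy()
        d['pageTitle'].fillna(u'空', inplace=True)
        title_hit = (d['pageTitle'].str.contains(u'快车-律师助手') |
                     d['pageTitle'].str.contains(u'咨询发布成功') |
                     d['pageTitle'].str.contains(u'免费发布法律咨询') |
                     d['pageTitle'].str.contains(u'法律快搜'))
        overlap += len(d[title_hit & d['fullURL'].str.contains('\.html', na=False)])
    print(overlap)   # expected to be 0 if step 2 is redundant
```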

###  1.2.3 Step 3: apply deletion rule 5

```code
    # Re-read the data from the database (built on the result of step 2)
    engine = create_engine('mysql+pymysql://root:@127.0.0.1:3306/jing?charset=utf8')
    sql = pd.read_sql('cleaned_two', engine, chunksize = 10000)
    
    def dropduplicate(i): 
        j = i[['realIP','fullURL','pageTitle','userID','timestamp_format']].copy()
        return j
    
    count6 = [dropduplicate(i) for i in sql]
    count6 = pd.concat(count6)
    print len(count6) # 2012895
    count7 = count6.drop_duplicates(['fullURL','userID','timestamp_format']) # duplicates must be dropped again over the full data, because different chunks can contain the same rows
    print len(count7) # 647300
    savetosql(count7, 'cleaned_three')
```
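`savetosql` is a small helper carried over from the data exploration post and is not defined in this part. A minimal sketch of what it is assumed to do (write a DataFrame into the given table, appending in chunks):

```code
    # Sketch of the savetosql helper used below (assumed behavior, not the original definition)
    def savetosql(df, tablename):
        engine = create_engine('mysql+pymysql://root:@127.0.0.1:3306/jing?charset=utf8')
        df.to_sql(tablename, engine, index=False, if_exists='append', chunksize=10000)
```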

##  1.3 Check the total record counts of the tables after deletion

```code
    # Count the records in the all_gzdata table
    engine = create_engine('mysql+pymysql://root:@127.0.0.1:3306/jing?charset=utf8')
    sql = pd.read_sql('all_gzdata', engine, chunksize = 10000)
    lens = 0
    for i in sql:
        temp = len(i)
        lens = temp + lens
    print lens # 837450
    
    # Count the records in the cleaned_one table
    engine = create_engine('mysql+pymysql://root:@127.0.0.1:3306/jing?charset=utf8')
    sql1 = pd.read_sql('cleaned_one', engine, chunksize = 10000)
    lens1 = 0
    for i in sql1:
        temp = len(i)
        lens1 = temp + lens1
    print lens1 # 1341930
    
    # Count the records in the cleaned_two table
    engine = create_engine('mysql+pymysql://root:@127.0.0.1:3306/jing?charset=utf8')
    sql2 = pd.read_sql('cleaned_two', engine, chunksize = 10000)
    lens2 = 0
    for i in sql2:
        temp = len(i)
        lens2 = temp + lens2
    print lens2 # 2012895
    
    # Count the records in the cleaned_three table
    engine = create_engine('mysql+pymysql://root:@127.0.0.1:3306/jing?charset=utf8')
    sql3 = pd.read_sql('cleaned_three', engine, chunksize = 10000)
    lens3 = 0
    for i in sql3:
        temp = len(i)
        lens3 = temp + lens3
    print lens3 # 1294600
```

#  2 Data Transformation

##  2.1 Identify pagination URLs

After identifying them, normalize the URLs back to the base page and drop duplicate records (same user ID and same normalized URL).
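The two pagination patterns handled below are URLs ending in `_<n>.html` (e.g. `.../2007020619634_2.html`) and in `_<n>_p<m>.html` (e.g. `.../29_1_p3.html`); both are rewritten back to the base `.html` page. A minimal sketch of the two replacements on made-up example URLs (newer pandas may require `regex=True` in `str.replace`):

```code
    # Sketch: normalize the two pagination patterns on made-up example URLs
    import pandas as pd
    
    urls = pd.Series(['http://www.lawtime.cn/ask/question_2007020619634_2.html',
                      'http://www.lawtime.cn/ask/exp/29_1_p3.html'])
    # Replacement 1 first: strip the "_1_p3"-style suffix
    urls = urls.str.replace('_\d{0,2}_\w{0,2}.html', '.html')
    # Replacement 2: strip the plain "_2"-style suffix
    urls = urls.str.replace('_\d{0,2}.html', '.html')
    print(urls.tolist())
    # ['http://www.lawtime.cn/ask/question_2007020619634.html',
    #  'http://www.lawtime.cn/ask/exp/29.html']
```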

```code
    # Read the data from the database in chunks
    engine = create_engine('mysql+pymysql://root:@127.0.0.1:3306/jing?charset=utf8')
    sql = pd.read_sql('cleaned_three', engine, chunksize = 10000)
    
    l0 = 0
    l1 = 0
    l2 = 0
    for i in sql:
        d = i.copy()
        # Total number of records
        temp0 = len(d)
        l0 = l0 + temp0
        
        # Match 1: count records whose URL looks like http://www.lawtime.cn***/2007020619634_2.html
        # (match 1 is taken to also cover match 2)
        x1 = d[d['fullURL'].str.contains('_\d{0,2}.html')]
        temp1 = len(x1)
        l1 = l1 + temp1    
    
        # Match 2: count records whose URL looks like http://www.lawtime.cn***/29_1_p3.html
        x2 = d[d['fullURL'].str.contains('_\d{0,2}_\w{0,2}.html')]
        temp2 = len(x2)
        l2 = l2 + temp2
        
        x1.to_sql('l1', engine, index=False, if_exists = 'append') # save
        x2.to_sql('l2', engine, index=False, if_exists = 'append') # save
    
    print l0,l1,l2 # 1941900 166365 27780
    # Note: deduplicating inside the chunk loop is incomplete, so the whole table is read again for a second round of deduplication afterwards
    # [First pass]
    # Read the data from the database in chunks
    engine = create_engine('mysql+pymysql://root:@127.0.0.1:3306/jing?charset=utf8')
    sql = pd.read_sql('cleaned_three', engine, chunksize = 10000)
    l4 = 0
    for i in sql:
        d = i.copy()
    
        # Note: replacement 1 and replacement 2 must be applied in this order, otherwise some pagination suffixes survive
        # Replacement 1: strip the "_1_p3"-style suffix from URLs like http://www.lawtime.cn***/29_1_p3.html to get the base URL
        d['fullURL'] = d['fullURL'].str.replace('_\d{0,2}_\w{0,2}.html','.html') # 9260 URLs of this kind
        
        # Replacement 2: strip the "_2"-style suffix from URLs like http://www.lawtime.cn***/2007020619634_2.html to get the base URL
        d['fullURL'] = d['fullURL'].str.replace('_\d{0,2}.html','.html') # 55455 - 9260 = 46195 URLs of this kind
        
        d = d.drop_duplicates(['fullURL','userID']) # drop duplicates (same URL and same user ID); still incomplete, because different chunks share duplicates
        temp = len(d)
        l4 = l4 + temp
        d.to_sql('changed_1', engine, index=False, if_exists = 'append') # save
    
    print l4 # 1643197
```
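The comment in the block above insists the two replacements cannot be swapped. A hypothetical URL whose pagination suffix is all digits shows why: run the plain `_<n>.html` replacement first and it eats only the tail of the two-part suffix, leaving an underscore residue the other pattern can no longer match (the URL is made up for illustration; newer pandas may require `regex=True`):

```code
    # Why the order of the two replacements matters (hypothetical URL for illustration)
    import pandas as pd
    
    url = pd.Series(['http://www.lawtime.cn/ask/exp/13_1_2.html'])
    
    # Wrong order: the plain "_<n>.html" replacement eats only '_2.html', leaving '13_1.html'
    wrong = url.str.replace('_\d{0,2}.html','.html').str.replace('_\d{0,2}_\w{0,2}.html','.html')
    # Right order: the two-part "_<n>_<x>.html" suffix is stripped first
    right = url.str.replace('_\d{0,2}_\w{0,2}.html','.html').str.replace('_\d{0,2}.html','.html')
    
    print(wrong.tolist())   # ['http://www.lawtime.cn/ask/exp/13_1.html']  -- residue left behind
    print(right.tolist())   # ['http://www.lawtime.cn/ask/exp/13.html']
```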

```code
    # [Second pass]
    # Drop the duplicates that remain across chunks
    # Read the data from the database (built on the previous step)
    engine = create_engine('mysql+pymysql://root:@127.0.0.1:3306/jing?charset=utf8')
    sql = pd.read_sql('changed_1', engine, chunksize = 10000)
    
    def dropduplicate(i): 
        j = i[['realIP','fullURL','pageTitle','userID','timestamp_format']].copy()
        return j
    
    counts1 = [dropduplicate(i) for i in sql]
    counts1 = pd.concat(counts1)
    print len(counts1) # 1095216
    a = counts1.drop_duplicates(['fullURL','userID'])
    print len(a) # 528166
    savetosql(a, 'changed_2')
```

```code
    # Check whether the pagination suffixes were replaced completely
    # Read the data from the database in chunks
    engine = create_engine('mysql+pymysql://root:@127.0.0.1:3306/jing?charset=utf8')
    sql = pd.read_sql('changed_2', engine, chunksize = 10000)
    
    l0 = 0
    l1 = 0
    l2 = 0
    for i in sql:
        d = i.copy()
        # Total number of records
        temp0 = len(d)
        l0 = l0 + temp0
        
        # Match 1: count records whose URL looks like http://www.lawtime.cn***/2007020619634_2.html
        x1 = d[d['fullURL'].str.contains('_\d{0,2}.html')]
        temp1 = len(x1)
        l1 = l1 + temp1    
    
        # Match 2: count records whose URL looks like http://www.lawtime.cn***/29_1_p3.html
        x2 = d[d['fullURL'].str.contains('_\d{0,2}_\w{0,2}.html')]
        temp2 = len(x2)
        l2 = l2 + temp2
    
    print l0,l1,l2 # 528166 0 0 -- both match counts are 0, so the replacement is complete
```

##  2.2 Classify the URLs correctly

Manually analyze the URLs of the consultation (zixun) and knowledge (zhishi) categories.

```code
    # Read the data from the database in chunks
    engine = create_engine('mysql+pymysql://root:@127.0.0.1:3306/jing?charset=utf8')
    sql = pd.read_sql('changed_2', engine, chunksize = 10000)
    
    def countzhishi(i):
        j = i[['fullURL']].copy()
        j['type'] = 'else'
        j['type'][j['fullURL'].str.contains('(info)|(faguizt)')] = 'zhishi'   # knowledge pages
        j['type'][j['fullURL'].str.contains('(ask)|(askzt)')] = 'zixun'       # consultation pages
        
        return j
    counts2 = [countzhishi(i) for i in sql]
    counts2 = pd.concat(counts2)
    counts2['type'].value_counts()
    # Compute the share of each category
    a = counts2['type'].value_counts()
    b = DataFrame(a)
    b.columns = ['num']
    b.index.name = 'type'
    b['per'] = b['num']/b['num'].sum()*100
    b
```

![](https://img-blog.csdn.net/20180214083711155?watermark/2/text/aHR0cDovL2Jsb2cuY3Nkbi5uZXQvdTAxMjA2Mzc3Mw==/font/5a6L5L2T/fontsize/400/fill/I0JBQkFCMA==/dissolve/70)

###  2.2.1 Step 1: manually analyze the knowledge-category URLs to find their second-level categories

```code
    c = counts2[counts2['type']=='zhishi']
    
    d = c[c['fullURL'].str.contains('info')]
    print len(d) # 102140
    d['iszsk'] = 'else' # turns out to be empty in the end
    d['iszsk'][d['fullURL'].str.contains('info')] = 'infoelsezsk' # 102032 URLs contain 'info'
    d['iszsk'][d['fullURL'].str.contains('zhishiku')] = 'zsk' # 108 URLs contain 'zhishiku'
    d['iszsk'].value_counts()  
    # The result shows there are no types other than 'info' and 'zhishiku', and the 'info'-only URLs (infoelsezsk)
    # and the 'zhishiku' URLs (zsk) do not overlap. So the second-level categories under the knowledge category are
    # analyzed in two parts, and the union of the two sets of categories gives all second-level categories.
```

![](https://img-blog.csdn.net/20180214083828902?watermark/2/text/aHR0cDovL2Jsb2cuY3Nkbi5uZXQvdTAxMjA2Mzc3Mw==/font/5a6L5L2T/fontsize/400/fill/I0JBQkFCMA==/dissolve/70)

###  2.2.2 Step 2: extract the second-level category from the URL with regular expressions

```code
    # Approach: split the URLs into the two kinds using the 'iszsk' column built above,
    # then extract the category with a regular expression for each kind
    # Drawback: this row-by-row loop is very slow
    import re
    # For URLs like http://www.lawtime.cn/info/jiaotong/jtsgcl/2011070996791.html,
    # extract the second-level category name, e.g. "jiaotong"
    pattern = re.compile('/info/(.*?)/',re.S)
    e = d[d['iszsk'] == 'infoelsezsk']
    for i in range(len(e)):
        e.iloc[i,2] = re.findall(pattern, e.iloc[i,0])[0]
    print e.head()
    
    # For URLs like http://www.lawtime.cn/zhishiku/laodong/info/***.html,
    # extract the second-level category name, e.g. "laodong"
    # There are also URLs like http://www.lawtime.cn/zhishiku/laodong/***.html,
    # so 'zhishiku/(.*?)/' is used for the match
    pattern1 = re.compile('zhishiku/(.*?)/',re.S)
    f = d[d['iszsk'] == 'zsk']
    for i in range(len(f)):
        f.iloc[i,2] = re.findall(pattern1, f.iloc[i,0])[0]
    print f.head()
```
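Since the loop above is flagged as very slow, the same extraction can be done in one vectorized pass per group with pandas' `str.extract`. A sketch under the same assumptions (`d` has the columns fullURL, type, iszsk, and the extracted category overwrites the third column, as the loop does):

```code
    # Vectorized sketch of the same second-level extraction (no Python-level loop)
    e = d[d['iszsk'] == 'infoelsezsk'].copy()
    f = d[d['iszsk'] == 'zsk'].copy()
    # Overwrite the 'iszsk' column with the extracted category, mirroring iloc[i,2] above
    e['iszsk'] = e['fullURL'].str.extract('/info/(.*?)/', expand=False)
    f['iszsk'] = f['fullURL'].str.extract('zhishiku/(.*?)/', expand=False)
    print(e.head())
    print(f.head())
```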

###  2.2.3 Step 3: rename the columns

```code
    e.columns = ['fullURL', 'type1', 'type2']
    print e.head()
    
    f.columns = ['fullURL', 'type1', 'type2']
    print f.head()
    
    # Merge the two processed sets of records and take the union of their second-level categories
    g = pd.concat([e,f])
    h = g['type2'].value_counts()
    
    # Count the second-level categories in each set; the union of the two sets covers everything needed
    len(e['type2'].value_counts()) # 66
    len(f['type2'].value_counts()) # 31
    len(g['type2'].value_counts()) # 69
    
    print h.head()
    
    print h.index # all second-level categories under the knowledge category
```

###  2.2.4 Step 4: save each second-level category to the database

```code
    # Save the records of each second-level category to a table named after that category
    detailtypes = h.index
    for i in range(len(detailtypes)):
        x = g[g['type2'] == h.index[i]]
        savetosql(x, h.index[i])
```

###  2.2.5 Step 5: extract the third-level category from the URL with regular expressions

```code
    # Work on a copy of e so the original data is not modified
    q = e.copy()
    q['type3'] = np.nan
    resultype3 = DataFrame([],columns=q.columns)
    for i in range(len(h.index)):
        pattern2 = re.compile('/info/'+h.index[i]+'/(.*?)/',re.S)
        current = q[q['type2'] == h.index[i]]
        print current.head()
        for j in range(len(current)):
            findresult = re.findall(pattern2, current.iloc[j,0])
            if findresult == []: # if the match is empty, set the third-level category to NaN
                current.iloc[j,3] = np.nan
            else:
                current.iloc[j,3] = findresult[0]
        resultype3 = pd.concat([resultype3,current]) # append the processed rows
    resultype3.head()
```
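The nested loop for the third-level category can likewise be vectorized. A sketch that pulls both levels out of `/info/` URLs in one pass with a named-group regex; rows without a third path segment get NaN, as in the loop above (this is a simplification: it does not restrict type2 to the categories in `h.index`):

```code
    # Vectorized sketch: extract second- and third-level categories from /info/ URLs in one pass
    q = e.copy()
    parts = q['fullURL'].str.extract('/info/(?P<type2>[^/]+)/(?:(?P<type3>[^/]+)/)?', expand=True)
    q['type2'] = parts['type2']
    q['type3'] = parts['type3']
    q.head()
```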

```code
    resultype3.set_index('fullURL',inplace=True)
    resultype3.head(10)
```

![](https://img-blog.csdn.net/20180214084332544?watermark/2/text/aHR0cDovL2Jsb2cuY3Nkbi5uZXQvdTAxMjA2Mzc3Mw==/font/5a6L5L2T/fontsize/400/fill/I0JBQkFCMA==/dissolve/70)

```code
    # Count the third-level categories under the marriage (hunyin) category
    j = resultype3[resultype3['type2'] == 'hunyin']['type3'].value_counts()
    print len(j) # 145
    j.head()
```

![](https://img-blog.csdn.net/20180214084407715?watermark/2/text/aHR0cDovL2Jsb2cuY3Nkbi5uZXQvdTAxMjA2Mzc3Mw==/font/5a6L5L2T/fontsize/400/fill/I0JBQkFCMA==/dissolve/70)

###  2.2.6 Step 6: sort the third-level categories by descending count within each second-level category, then save

```code
    # Option 1
    Type3nums = resultype3.pivot_table(index = ['type2','type3'], aggfunc = 'count')
    # Option 2: Type3nums = resultype3.groupby([resultype3['type2'],resultype3['type3']]).count()
    # After the count aggregation the 'type1' column holds the per-group counts, so sorting by
    # ['type2','type1'] orders each second-level category's third-level categories by descending count
    r = Type3nums.reset_index().sort_values(by=['type2','type1'],ascending=[True,False])
    r.set_index(['type2','type3'],inplace = True)
    # Saved file names follow the pattern '2_2_<k><description>'; this table, Type3nums, lists all third-level categories
    r.to_excel('2_2_3Type3nums.xlsx')
    r
```

![](https://img-blog.csdn.net/20180214084535930?watermark/2/text/aHR0cDovL2Jsb2cuY3Nkbi5uZXQvdTAxMjA2Mzc3Mw==/font/5a6L5L2T/fontsize/400/fill/I0JBQkFCMA==/dissolve/70)

###  2.2.7 Attribute reduction

Extract the data needed for the modeling stage that follows: the consultation (ask) records and the marriage (hunyin) records.

```code
    # Save the records that meet the requirements to the database
    # Option 1:
    # Read the data from the database in chunks
    engine = create_engine('mysql+pymysql://root:@127.0.0.1:3306/jing?charset=utf8')
    sql = pd.read_sql('changed_2', engine, chunksize = 10000)
    l1 = 0
    l2 = 0 
    for i in sql:
        zixun = i[['userID','fullURL']][i['fullURL'].str.contains('(ask)|(askzt)')].copy()
        l1 = len(zixun) + l1
        hunyin = i[['userID','fullURL']][i['fullURL'].str.contains('hunyin')].copy()    
        l2 = len(hunyin) + l2
        zixun.to_sql('zixunformodel', engine, index=False,if_exists = 'append')
        hunyin.to_sql('hunyinformodel', engine, index=False,if_exists = 'append')
    print l1,l2 # 393185 16982
    
    
    # Option 2:
    m = counts2[counts2['type'] == 'zixun']
    n = counts2[counts2['fullURL'].str.contains('hunyin')]
    p = m[m['fullURL'].str.contains('hunyin')]
    p # empty, so the consultation (zixun) pages and the marriage (hunyin) pages do not overlap
    savetosql(m,'zixun')
    savetosql(n,'hunyin')
```

#  Note: the complete code for this chapter is available at [this link](https://github.com/clover95/DataAnalysisbyPython/tree/master/chapter12)

