# Data Preprocessing: One-Hot Encoding

## Motivating Problem

["male", "female"]

["from Europe", "from US", "from Asia"]

["uses Firefox", "uses Chrome", "uses Safari", "uses Internet Explorer"]

["male", "from US", "uses Internet Explorer"] 表示为[0, 1, 3]

["female", "from Asia", "uses Chrome"]表示为[1, 2, 1]

1、Why do we binarize categorical features?
We binarize categorical inputs so that they can be treated as vectors in Euclidean space (we call this embedding the vectors in Euclidean space). One-hot encoding maps the values of a discrete feature into Euclidean space: each value of the feature corresponds to a point in that space.

2、Why do we embed the feature vectors in Euclidean space?
Because many algorithms for classification, regression, clustering, etc. require computing distances or similarities between feature vectors, and most common definitions of distance and similarity are defined over Euclidean space. So we would like our features to lie in Euclidean space as well.

3、Why does embedding the feature vector in Euclidean space require us to binarize categorical features?
Take a dataset with just one categorical feature, say job_type, and say it takes three values encoded as 1, 2, 3.
Now consider three feature vectors x_1 = (1), x_2 = (2), x_3 = (3). What are the Euclidean distances between x_1 and x_2, x_2 and x_3, and x_1 and x_3? d(x_1, x_2) = 1, d(x_2, x_3) = 1, d(x_1, x_3) = 2. This says that job type 1 is closer to job type 2 than it is to job type 3. Does this make sense? Can we even rationally define a proper distance between different job types? In many cases we cannot; for such categorical features, isn't it fairer to assume that all values are equally far from each other?
Now let us see what happens when we binarize the same feature vectors: x_1 = (1, 0, 0), x_2 = (0, 1, 0), x_3 = (0, 0, 1). What are the distances between them now? They are all sqrt(2). So, by binarizing the input, we implicitly state that all values of the categorical feature are equally far from each other.
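The distance comparison above can be checked directly (a minimal sketch using only the standard library):

```python
import math

def euclid(a, b):
    # Plain Euclidean distance between two equal-length vectors
    return math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)))

# Integer-coded job types: pairwise distances are unequal
x1, x2, x3 = [1], [2], [3]
print(euclid(x1, x2), euclid(x2, x3), euclid(x1, x3))  # 1.0 1.0 2.0

# One-hot coded job types: every pair is exactly sqrt(2) apart
h1, h2, h3 = [1, 0, 0], [0, 1, 0], [0, 0, 1]
print(euclid(h1, h2), euclid(h2, h3), euclid(h1, h3))
```

The one-hot vectors place the three job types at the corners of an equilateral simplex, which is exactly the "equally far apart" assumption.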

4、Does this still apply when the categorical feature takes many values?
Note that our reason for binarizing categorical features is independent of the number of values the feature takes, so yes, even if the categorical feature takes 1000 values, we would still prefer binarization.

5、Are there cases when we can avoid doing binarization?

Yes. As we figured out earlier, we binarize because we want a meaningful distance relationship between the different values. As long as such a relationship already exists, we can avoid binarizing the categorical feature. For example, suppose you are building a classifier to decide whether a webpage is an important entity page (a page important to a particular entity), and one feature is the page's rank in the search results for that entity. Then (1) the rank feature is categorical, and (2) rank 1 and rank 2 are clearly closer to each other than rank 1 and rank 3, so rank already defines a meaningful distance relationship; in this case we don't have to binarize it.
More generally, if you can cluster the categorical values into disjoint subsets such that within each subset the values have a meaningful distance relationship, then you don't have to binarize fully; you can split the feature only over these clusters. For example, if a categorical feature takes 1000 values but you can split them into two groups of, say, 400 and 600 such that within each group the values have a meaningful distance relationship, then instead of fully binarizing you can add just two features, one per cluster, and that should be fine.
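As a sketch of this partial binarization (the cluster names and values here are made up for illustration):

```python
# Hypothetical clusters of one categorical feature. Within each cluster the
# values are assumed to have a meaningful ordering, so we keep them as ranks.
cluster_a = {"v1": 1, "v2": 2, "v3": 3}  # cluster A: three ordered values
cluster_b = {"w1": 1, "w2": 2}           # cluster B: two ordered values

def encode(value):
    # One feature per cluster; 0 means "the value is not in this cluster"
    return [cluster_a.get(value, 0), cluster_b.get(value, 0)]

print(encode("v2"))  # [2, 0]
print(encode("w1"))  # [0, 1]
```

Instead of one indicator column per value (full one-hot), we get one column per cluster, with within-cluster ordering preserved.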

It also depends on your ML algorithm. Some methods require almost no effort to normalize features and can handle both continuous and discrete features, like tree-based methods: C4.5, CART, random forest, bagging, or boosting. But most parametric models (generalized linear models, neural networks, SVMs, etc.) and methods based on distance metrics (KNN, kernel methods, etc.) require careful preprocessing to achieve good results. Standard approaches include binarizing all categorical features, scaling all continuous features to zero mean and unit variance, and so on.
In short: tree-based methods (random forest, bagging, boosting, etc.) do not require feature normalization, while parametric models and distance-based models do.

## Tree Models Rarely Need One-Hot Encoding

A tree model dynamically builds something like a one-hot + feature-crossing mechanism during training:
1. One feature (or several features together) is ultimately encoded by the leaf node a sample falls into; one-hot encoding can be understood as a set of (here, three) independent indicator events.
2. A decision tree has no notion of feature magnitude, only of which part of a feature's distribution a value falls into.

## One-Hot Encoding

For six states, the natural binary state codes are 000, 001, 010, 011, 100, 101; one-hot encoding instead uses one bit per state: 000001, 000010, 000100, 001000, 010000, 100000.

1. It solves the problem that many classifiers cannot handle categorical (attribute) data directly.

2. To some extent, it also expands the feature space.
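The feature-expansion effect can be sketched with pandas (assuming pandas is available; `get_dummies` is its built-in one-hot helper):

```python
import pandas as pd

# One categorical column with 4 rows expands into 3 indicator columns,
# one per distinct category (columns are sorted by category name)
df = pd.DataFrame({"Embarked": ["S", "C", "Q", "S"]})
dummies = pd.get_dummies(df["Embarked"], prefix="Embarked")
print(list(dummies.columns))  # ['Embarked_C', 'Embarked_Q', 'Embarked_S']
```

A single column thus becomes as many columns as there are categories, which is the "expansion" mentioned above.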

## Practical Application

In the Kaggle Titanic problem, the port of embarkation takes three values, recorded in the data as S, C, and Q.

```python
def dataPreprocess(df):
    df.loc[df['Sex'] == 'male', 'Sex'] = 0
    df.loc[df['Sex'] == 'female', 'Sex'] = 1

    # Two Embarked values are missing; fill them in first
    df['Embarked'] = df['Embarked'].fillna('S')
    # Some Age values are missing; fill them with the median
    df['Age'] = df['Age'].fillna(df['Age'].median())

    df.loc[df['Embarked'] == 'S', 'Embarked'] = 0
    df.loc[df['Embarked'] == 'C', 'Embarked'] = 1
    df.loc[df['Embarked'] == 'Q', 'Embarked'] = 2

    # Bin Fare into five coarse levels
    df['NewFare'] = df['Fare']
    df.loc[(df.Fare < 40), 'NewFare'] = 0
    df.loc[((df.Fare >= 40) & (df.Fare < 100)), 'NewFare'] = 1
    df.loc[((df.Fare >= 100) & (df.Fare < 150)), 'NewFare'] = 2
    df.loc[((df.Fare >= 150) & (df.Fare < 200)), 'NewFare'] = 3
    df.loc[(df.Fare >= 200), 'NewFare'] = 4
    return df
```

```python
from sklearn.preprocessing import OneHotEncoder

def data_process_onehot(df):
    # OneHotEncoder expects a 2-D array, so reshape to a column vector
    train_Embarked = df["Embarked"].values.reshape(-1, 1)

    onehot_encoder = OneHotEncoder(sparse=False)
    train_OneHotEncoded = onehot_encoder.fit_transform(train_Embarked)
    # Categories 0/1/2 (in sorted order) correspond to S/C/Q from dataPreprocess
    df["EmbarkedS"] = train_OneHotEncoded[:, 0]
    df["EmbarkedC"] = train_OneHotEncoded[:, 1]
    df["EmbarkedQ"] = train_OneHotEncoded[:, 2]
    return df
```

```python
# ReadData and linearRegression are helpers defined elsewhere in the project
data_train = ReadData.readSourceData()
data_train = dataPreprocess(data_train)
data_train = data_process_onehot(data_train)
percent = linearRegression(data_train)
```

Reference: https://blog.csdn.net/wl_ss/article/details/78508367

posted @ 2018-12-09 11:59 NeilZhang