## scikit-learn Usage Notes and a Brief Summary of Sign Prediction

### S1. Importing Data

```
Stock prices    indicator1    indicator2
2.0             123           1252
1.0             ..            ..
..              .             .
```

```python
import numpy as np

f = open("filename.txt")
f.readline()  # skip the header
data = np.loadtxt(f)
X = data[:, 1:]  # select columns 1 through end
y = data[:, 0]   # select column 0, the stock price
```

Importing data in libsvm format:

```python
>>> from sklearn.datasets import load_svmlight_file
>>> X_train, y_train = load_svmlight_file("/path/to/train_dataset.txt")
>>> X_train.todense()  # convert the sparse matrix to a dense feature matrix
```

### S2. Supervised Classification: Several Common Methods

Logistic Regression

```python
>>> from sklearn.linear_model import LogisticRegression
>>> clf2 = LogisticRegression().fit(X, y)
>>> clf2
LogisticRegression(C=1.0, intercept_scaling=1, dual=False, fit_intercept=True,
                   penalty='l2', tol=0.0001)
>>> clf2.predict_proba(X_new)
array([[  9.07512928e-01,   9.24770379e-02,   1.00343962e-05]])
```

Linear SVM (linear kernel)

```python
>>> from sklearn.svm import LinearSVC
>>> clf = LinearSVC()
>>> clf.fit(X, Y)
>>> X_new = [[5.0, 3.6, 1.3, 0.25]]
>>> clf.predict(X_new)  # result[0] is the class label
array([0], dtype=int32)
```

SVM (RBF or other kernel)

```python
>>> from sklearn import svm
>>> clf = svm.SVC()
>>> clf.fit(X, Y)
SVC(C=1.0, cache_size=200, class_weight=None, coef0=0.0, degree=3, gamma=0.0,
    kernel='rbf', probability=False, shrinking=True, tol=0.001, verbose=False)
>>> clf.predict([[2., 2.]])
array([ 1.])
```

Naive Bayes (Gaussian likelihood)

```python
>>> from sklearn.naive_bayes import GaussianNB
>>> from sklearn import datasets
>>> gnb = GaussianNB()
>>> gnb = gnb.fit(x, y)
>>> gnb.predict(xx)  # result[0] is the most likely class label
```

Decision Tree (classification not regression)

```python
>>> from sklearn import tree
>>> clf = tree.DecisionTreeClassifier()
>>> clf = clf.fit(X, Y)
>>> clf.predict([[2., 2.]])
array([ 1.])
```

Ensemble (Random Forests, classification not regression)

```python
>>> from sklearn.ensemble import RandomForestClassifier
>>> clf = RandomForestClassifier(n_estimators=10)
>>> clf = clf.fit(X, Y)
>>> clf.predict(X_test)
```

### S3. Model Selection (Cross-validation)

```python
>>> from sklearn import cross_validation
>>> from sklearn import svm
>>> clf = svm.SVC(kernel='linear', C=1)
>>> scores = cross_validation.cross_val_score(clf, iris.data, iris.target, cv=5)  # 5-fold CV

# change the metric
>>> from sklearn import metrics
>>> cross_validation.cross_val_score(clf, iris.data, iris.target, cv=5,
...                                  score_func=metrics.f1_score)
# F1 score: http://en.wikipedia.org/wiki/F1_score
```

Note: to use logistic regression instead, set `clf = LogisticRegression()`.
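In scikit-learn 0.18 and later the `cross_validation` module was replaced by `model_selection`, and `score_func` by the `scoring` string argument. A minimal sketch of the same 5-fold run on the iris data with the current API, here with `LogisticRegression` as the note suggests (the `max_iter` bump is only so the solver converges cleanly):

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

iris = load_iris()
clf = LogisticRegression(max_iter=1000)

# 5-fold cross-validation; 'f1_macro' averages the F1 score over the classes
scores = cross_val_score(clf, iris.data, iris.target, cv=5, scoring='f1_macro')
print(scores.mean())
```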

### S4. Sign Prediction Experiment

Features: the network-topology features follow "Predict positive and negative links in online social network", combined with user-interaction features.
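Among the topology features that paper uses are the endpoint degrees of an edge and its embeddedness (the number of common neighbors). A hypothetical pure-Python sketch of just these two, assuming the graph arrives as an undirected edge list (the function name and layout are illustrative, not from the original experiment code):

```python
from collections import defaultdict

def degree_and_embeddedness(edges):
    """For each edge (u, v), return (deg(u), deg(v), #common neighbors)."""
    neighbors = defaultdict(set)
    for u, v in edges:
        neighbors[u].add(v)
        neighbors[v].add(u)
    feats = {}
    for u, v in edges:
        common = len(neighbors[u] & neighbors[v])  # embeddedness of the edge
        feats[(u, v)] = (len(neighbors[u]), len(neighbors[v]), common)
    return feats

edges = [(1, 2), (2, 3), (1, 3), (3, 4)]
print(degree_and_embeddedness(edges)[(1, 2)])  # (2, 2, 1): node 3 is shared
```

These tuples can then be stacked into the feature matrix `X` alongside the user-interaction features.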

### S5. General-purpose Test Source Code
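The code for this section did not survive in this copy. As a placeholder, a hypothetical minimal harness that ties the pieces above together (fit each classifier from S2, score it with 5-fold cross-validation); the iris data stands in for the real data set, and all names here are assumptions:

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC, LinearSVC
from sklearn.tree import DecisionTreeClassifier

iris = load_iris()
X, y = iris.data, iris.target

classifiers = {
    "LogisticRegression": LogisticRegression(max_iter=1000),
    "LinearSVC": LinearSVC(),
    "SVC (RBF)": SVC(),
    "GaussianNB": GaussianNB(),
    "DecisionTree": DecisionTreeClassifier(),
    "RandomForest": RandomForestClassifier(n_estimators=10),
}

# report mean accuracy and spread for each model
for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: {scores.mean():.3f} (+/- {scores.std():.3f})")
```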

posted on 2013-07-05 23:50 by 百小度治哥
