Experiment 7: Spark Machine Learning Library MLlib Programming Practice
I. Experiment Objectives
(1) Master basic MLlib programming methods through hands-on experiments.
(2) Learn to use MLlib to solve common data-analysis problems, including data import, principal component analysis, and classification and prediction.
II. Experiment Platform
Operating system: Ubuntu 16.04
JDK version: 1.7 or later
Spark version: 2.1.0
Dataset: the Adult dataset (http://archive.ics.uci.edu/ml/datasets/Adult), which can also be downloaded directly from the "Datasets" page in the "Download Area" of this tutorial's website. The data were extracted from the 1994 US Census database and can be used to predict whether a resident's income exceeds $50K/year. The class variable is whether annual income exceeds $50K; the attribute variables cover age, workclass, education, occupation, race, and other important information. Notably, 7 of the 14 attribute variables are categorical.
III. Experiment Content and Requirements
1. Data Import
Import the data from file and convert it to a DataFrame.
//Import the required packages
import org.apache.spark.ml.feature.PCA
import org.apache.spark.sql.Row
import org.apache.spark.ml.linalg.{Vector, Vectors}
import org.apache.spark.ml.evaluation.MulticlassClassificationEvaluator
import org.apache.spark.ml.{Pipeline, PipelineModel}
import org.apache.spark.ml.feature.{IndexToString, StringIndexer, VectorIndexer, HashingTF, Tokenizer}
import org.apache.spark.ml.classification.LogisticRegression
import org.apache.spark.ml.classification.LogisticRegressionModel
import org.apache.spark.ml.classification.{BinaryLogisticRegressionSummary, LogisticRegression}
import org.apache.spark.sql.functions

//Load the training and test sets. (The test set needs a small amount of preprocessing:
//the labels in adult.data.txt are ">50K" and "<=50K", while those in adult.test.txt are
//">50K." and "<=50K."; the trailing "." has been removed from the adult.test.txt labels here.)
scala> import spark.implicits._
import spark.implicits._

scala> case class Adult(features: org.apache.spark.ml.linalg.Vector, label: String)
defined class Adult

scala> val df = sc.textFile("adult.data.txt").map(_.split(",")).map(p => Adult(Vectors.dense(p(0).toDouble, p(2).toDouble, p(4).toDouble, p(10).toDouble, p(11).toDouble, p(12).toDouble), p(14).toString())).toDF()
df: org.apache.spark.sql.DataFrame = [features: vector, label: string]

scala> val test = sc.textFile("adult.test.txt").map(_.split(",")).map(p => Adult(Vectors.dense(p(0).toDouble, p(2).toDouble, p(4).toDouble, p(10).toDouble, p(11).toDouble, p(12).toDouble), p(14).toString())).toDF()
test: org.apache.spark.sql.DataFrame = [features: vector, label: string]
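Note that the transcript above reads both files the same way and does not show the step that actually strips the trailing "." from the adult.test.txt labels. A minimal sketch of that preprocessing, under the assumption that the raw UCI file is used as-is (the length filter is a guard against the non-data header line and blank lines in the raw adult.test file):

// Hypothetical preprocessing sketch: normalize the adult.test.txt labels so that
// ">50K." / "<=50K." match the ">50K" / "<=50K" labels of the training set.
val test = sc.textFile("adult.test.txt")
  .map(_.split(","))
  .filter(_.length > 14)                     // skip the header line and blank lines
  .map { p =>
    val label = p(14).trim.stripSuffix(".")  // drop the trailing "."
    Adult(Vectors.dense(p(0).toDouble, p(2).toDouble, p(4).toDouble,
      p(10).toDouble, p(11).toDouble, p(12).toDouble), label)
  }
  .toDF()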
2. Principal Component Analysis (PCA)
Perform principal component analysis on the 6 continuous numeric variables. PCA is a method that uses an orthogonal transformation to convert observations of a set of correlated variables into a set of linearly uncorrelated variables, the principal components. By projecting feature vectors onto the lower-dimensional space spanned by the principal components, PCA reduces their dimensionality. Use the setK() method to set the number of principal components to 3, converting the continuous feature vectors into 3-dimensional principal components.
Build the PCA model, fit the principal component decomposition on the training set, and then apply it to both the training set and the test set:

scala> val pca = new PCA().setInputCol("features").setOutputCol("pcaFeatures").setK(3).fit(df)
17/09/07 17:43:04 WARN BLAS: Failed to load implementation from: com.github.fommil.netlib.NativeSystemBLAS
17/09/07 17:43:04 WARN BLAS: Failed to load implementation from: com.github.fommil.netlib.NativeRefBLAS
17/09/07 17:43:04 WARN LAPACK: Failed to load implementation from: com.github.fommil.netlib.NativeSystemLAPACK
17/09/07 17:43:04 WARN LAPACK: Failed to load implementation from: com.github.fommil.netlib.NativeRefLAPACK
pca: org.apache.spark.ml.feature.PCAModel = pca_22d742dc5c91

scala> val result = pca.transform(df)
result: org.apache.spark.sql.DataFrame = [features: vector, label: string ... 1 more field]

scala> val testdata = pca.transform(test)
testdata: org.apache.spark.sql.DataFrame = [features: vector, label: string ... 1 more field]

scala> result.show(false)
+------------------------------------+------+-----------------------------------------------------------+
|features                            |label |pcaFeatures                                                |
+------------------------------------+------+-----------------------------------------------------------+
|[39.0,77516.0,13.0,2174.0,0.0,40.0] | <=50K|[77516.0654328193,-2171.6489938846585,-6.9463604765987625] |
|[50.0,83311.0,13.0,0.0,0.0,13.0]    | <=50K|[83310.99935595776,2.526033892790795,-3.38870240867987]    |
|[38.0,215646.0,9.0,0.0,0.0,40.0]    | <=50K|[215645.99925048646,6.551842584546877,-8.584953969073675]  |
|[53.0,234721.0,7.0,0.0,0.0,40.0]    | <=50K|[234720.99907961802,7.130299808613842,-9.360179790809983]  |
|[28.0,338409.0,13.0,0.0,0.0,40.0]   | <=50K|[338408.9991883054,10.289249842810678,-13.36825187163136]  |
|[37.0,284582.0,14.0,0.0,0.0,40.0]   | <=50K|[284581.9991669545,8.649756033705797,-11.281731333793557]  |
|[49.0,160187.0,5.0,0.0,0.0,16.0]    | <=50K|[160186.99926937037,4.86575372118689,-6.394299355794958]   |
|[52.0,209642.0,9.0,0.0,0.0,45.0]    | >50K |[209641.99910851708,6.366453450443119,-8.38705558572268]   |
|[31.0,45781.0,14.0,14084.0,0.0,50.0]| >50K |[45781.42721110636,-14082.596953729324,-26.3035091053821]  |
|[42.0,159449.0,13.0,5178.0,0.0,40.0]| >50K |[159449.15652342222,-5173.151337268416,-15.351831002507415]|
|[37.0,280464.0,10.0,0.0,0.0,80.0]   | >50K |[280463.9990886109,8.519356755954709,-11.188000533447731]  |
|[30.0,141297.0,13.0,0.0,0.0,40.0]   | >50K |[141296.99942061215,4.2900981666986855,-5.663113262632686] |
|[23.0,122272.0,13.0,0.0,0.0,30.0]   | <=50K|[122271.9995362372,3.7134109235547164,-4.887549331279983]  |
|[32.0,205019.0,12.0,0.0,0.0,50.0]   | <=50K|[205018.99929839539,6.227844686207229,-8.176186180265503]  |
|[40.0,121772.0,11.0,0.0,0.0,40.0]   | >50K |[121771.99934864056,3.6945287780540603,-4.918583567278704] |
|[34.0,245487.0,4.0,0.0,0.0,45.0]    | <=50K|[245486.99924622496,7.4601494174606815,-9.75000324288002]  |
|[25.0,176756.0,9.0,0.0,0.0,35.0]    | <=50K|[176755.9994399727,5.370793765347799,-7.029037217537133]   |
|[32.0,186824.0,9.0,0.0,0.0,40.0]    | <=50K|[186823.99934678187,5.675541056422981,-7.445605003141515]  |
|[38.0,28887.0,7.0,0.0,0.0,50.0]     | <=50K|[28886.99946951148,0.8668334219437271,-1.2969921640115318] |
|[43.0,292175.0,14.0,0.0,0.0,45.0]   | >50K |[292174.9990868344,8.87932321571431,-11.599483225618247]   |
+------------------------------------+------+-----------------------------------------------------------+
only showing top 20 rows

scala> testdata.show(false)
+------------------------------------+-------+-----------------------------------------------------------+
|features                            |label  |pcaFeatures                                                |
+------------------------------------+-------+-----------------------------------------------------------+
|[25.0,226802.0,7.0,0.0,0.0,40.0]    | <=50K.|[226801.99936708904,6.893313042325555,-8.993983821758796]  |
|[38.0,89814.0,9.0,0.0,0.0,50.0]     | <=50K.|[89813.99938947687,2.7209873244764906,-3.6809508659704675] |
|[28.0,336951.0,12.0,0.0,0.0,40.0]   | >50K. |[336950.99919122306,10.244920104026273,-13.310695651856003]|
|[44.0,160323.0,10.0,7688.0,0.0,40.0]| >50K. |[160323.23272903427,-7683.121090489607,-19.729118648470976]|
|[18.0,103497.0,10.0,0.0,0.0,30.0]   | <=50K.|[103496.99961293535,3.142862309150963,-4.141563083946321]  |
|[34.0,198693.0,6.0,0.0,0.0,30.0]    | <=50K.|[198692.9993369046,6.03791177465338,-7.894879761309586]    |
|[29.0,227026.0,9.0,0.0,0.0,40.0]    | <=50K.|[227025.99932507655,6.899470708670979,-9.011878890810314]  |
|[63.0,104626.0,15.0,3103.0,0.0,32.0]| >50K. |[104626.09338764261,-3099.8250060692035,-9.648800672052692]|
|[24.0,369667.0,10.0,0.0,0.0,40.0]   | <=50K.|[369666.99919110356,11.241251385609905,-14.581104454203475]|
|[55.0,104996.0,4.0,0.0,0.0,10.0]    | <=50K.|[104995.9992947583,3.186050789405019,-4.236895975019816]   |
|[65.0,184454.0,9.0,6418.0,0.0,40.0] | >50K. |[184454.1939240066,-6412.391589847388,-18.518448307264528] |
|[36.0,212465.0,13.0,0.0,0.0,40.0]   | <=50K.|[212464.99927015396,6.455148844458399,-8.458640605561254]  |
|[26.0,82091.0,9.0,0.0,0.0,39.0]     | <=50K.|[82090.999542367,2.489111409624171,-3.335593188553175]     |
|[58.0,299831.0,9.0,0.0,0.0,35.0]    | <=50K.|[299830.9989556855,9.111696151562521,-11.909141441347733]  |
|[48.0,279724.0,9.0,3103.0,0.0,48.0] | >50K. |[279724.0932834471,-3094.495799296398,-16.491321474159864] |
|[43.0,346189.0,14.0,0.0,0.0,50.0]   | >50K. |[346188.9990067698,10.522518314317386,-13.720686643182727] |
|[20.0,444554.0,10.0,0.0,0.0,25.0]   | <=50K.|[444553.9991678726,13.52288689604709,-17.47586621453762]   |
|[43.0,128354.0,9.0,0.0,0.0,30.0]    | <=50K.|[128353.99933456781,3.895809826834201,-5.163630508998832]  |
|[37.0,60548.0,9.0,0.0,0.0,20.0]     | <=50K.|[60547.99950268136,1.834388499828796,-2.482228457083787]   |
|[40.0,85019.0,16.0,0.0,0.0,45.0]    | >50K. |[85018.99937940767,2.5751267063691055,-3.4924978737087193] |
+------------------------------------+-------+-----------------------------------------------------------+
only showing top 20 rows
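Before moving on, it can be useful to sanity-check the fitted model itself. A brief sketch using the PCAModel obtained above (pc is the projection matrix, and explainedVariance, available on PCAModel since Spark 2.0, gives the proportion of variance captured by each component):

// Inspect the fitted PCAModel: the projection matrix and the per-component
// proportion of variance explained.
println("Principal components matrix:\n" + pca.pc)
println("Explained variance: " + pca.explainedVariance)

Because the six raw features sit on very different scales (fnlwgt runs into the hundreds of thousands while education-num stays below 20), the first component is dominated by that large column, as the pcaFeatures values above show; standardizing the features first, for example with StandardScaler, would yield a quite different decomposition.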
3. Train a Classification Model and Predict Resident Income
On top of the principal component analysis, use a logistic regression model (or a decision tree model) to predict whether a resident's income exceeds 50K, and validate the model on the test dataset.
Train the logistic regression model, test it, and obtain the prediction accuracy:

scala> val labelIndexer = new StringIndexer().setInputCol("label").setOutputCol("indexedLabel").fit(result)
labelIndexer: org.apache.spark.ml.feature.StringIndexerModel = strIdx_6721796011c5

scala> labelIndexer.labels.foreach(println)
<=50K
>50K

scala> val featureIndexer = new VectorIndexer().setInputCol("pcaFeatures").setOutputCol("indexedFeatures").fit(result)
featureIndexer: org.apache.spark.ml.feature.VectorIndexerModel = vecIdx_7b6672933fc3

scala> println(featureIndexer.numFeatures)
3

scala> val labelConverter = new IndexToString().setInputCol("prediction").setOutputCol("predictedLabel").setLabels(labelIndexer.labels)
labelConverter: org.apache.spark.ml.feature.IndexToString = idxToStr_d0c9321aaaa9

scala> val lr = new LogisticRegression().setLabelCol("indexedLabel").setFeaturesCol("indexedFeatures").setMaxIter(100)
lr: org.apache.spark.ml.classification.LogisticRegression = logreg_06812b41b118

scala> val lrPipeline = new Pipeline().setStages(Array(labelIndexer, featureIndexer, lr, labelConverter))
lrPipeline: org.apache.spark.ml.Pipeline = pipeline_b6b87b6e8cd5

scala> val lrPipelineModel = lrPipeline.fit(result)
lrPipelineModel: org.apache.spark.ml.PipelineModel = pipeline_b6b87b6e8cd5

scala> val lrModel = lrPipelineModel.stages(2).asInstanceOf[LogisticRegressionModel]
lrModel: org.apache.spark.ml.classification.LogisticRegressionModel = logreg_06812b41b118

scala> println("Coefficients: " + lrModel.coefficientMatrix + " Intercept: " + lrModel.interceptVector + " numClasses: " + lrModel.numClasses + " numFeatures: " + lrModel.numFeatures)
Coefficients: -1.9828586428133616E-7 -3.5090924715811705E-4 -8.451506276498941E-4 Intercept: [-1.4525982557843347] numClasses: 2 numFeatures: 3

scala> val lrPredictions = lrPipelineModel.transform(testdata)
lrPredictions: org.apache.spark.sql.DataFrame = [features: vector, label: string ... 7 more fields]

scala> val evaluator = new MulticlassClassificationEvaluator().setLabelCol("indexedLabel").setPredictionCol("prediction")
evaluator: org.apache.spark.ml.evaluation.MulticlassClassificationEvaluator = mcEval_38ac5c14fa2a

scala> val lrAccuracy = evaluator.evaluate(lrPredictions)
lrAccuracy: Double = 0.7764235163053484

scala> println("Test Error = " + (1.0 - lrAccuracy))
Test Error = 0.22357648369465155
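One caveat about the number reported above: in Spark 2.1, MulticlassClassificationEvaluator defaults to the "f1" metric, so the value computed without setMetricName is strictly an F1 score rather than plain accuracy. A minimal sketch of requesting accuracy explicitly:

// Explicitly request the "accuracy" metric (the evaluator's default is "f1").
val accEvaluator = new MulticlassClassificationEvaluator()
  .setLabelCol("indexedLabel")
  .setPredictionCol("prediction")
  .setMetricName("accuracy")
val accuracy = accEvaluator.evaluate(lrPredictions)
println("Test Error = " + (1.0 - accuracy))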
4. Hyperparameter Tuning
Use CrossValidator to determine the optimal parameters, including the optimal number of PCA dimensions and the classifier's own parameters.
//Imports required in addition to those from step 1 (not shown in the original transcript)
scala> import org.apache.spark.ml.tuning.{CrossValidator, ParamGridBuilder}
import org.apache.spark.ml.tuning.{CrossValidator, ParamGridBuilder}

scala> import org.apache.spark.ml.feature.PCAModel
import org.apache.spark.ml.feature.PCAModel

scala> val pca = new PCA().setInputCol("features").setOutputCol("pcaFeatures")
pca: org.apache.spark.ml.feature.PCA = pca_b11b53a1002b

scala> val labelIndexer = new StringIndexer().setInputCol("label").setOutputCol("indexedLabel").fit(df)
labelIndexer: org.apache.spark.ml.feature.StringIndexerModel = strIdx_f2a42d5e19c9

scala> val featureIndexer = new VectorIndexer().setInputCol("pcaFeatures").setOutputCol("indexedFeatures")
featureIndexer: org.apache.spark.ml.feature.VectorIndexer = vecIdx_0f9f0344fcfd

scala> val labelConverter = new IndexToString().setInputCol("prediction").setOutputCol("predictedLabel").setLabels(labelIndexer.labels)
labelConverter: org.apache.spark.ml.feature.IndexToString = idxToStr_74967420c4ea

scala> val lr = new LogisticRegression().setLabelCol("indexedLabel").setFeaturesCol("indexedFeatures").setMaxIter(100)
lr: org.apache.spark.ml.classification.LogisticRegression = logreg_3a643c15517d

scala> val lrPipeline = new Pipeline().setStages(Array(pca, labelIndexer, featureIndexer, lr, labelConverter))
lrPipeline: org.apache.spark.ml.Pipeline = pipeline_4ff414fedeed

scala> val paramGrid = new ParamGridBuilder().addGrid(pca.k, Array(1,2,3,4,5,6)).addGrid(lr.elasticNetParam, Array(0.2,0.8)).addGrid(lr.regParam, Array(0.01, 0.1, 0.5)).build()
paramGrid: Array[org.apache.spark.ml.param.ParamMap] = Array({
	logreg_3a643c15517d-elasticNetParam: 0.2,
	pca_b11b53a1002b-k: 1,
	logreg_3a643c15517d-regParam: 0.01
}, {
	logreg_3a643c15517d-elasticNetParam: 0.2,
	pca_b11b53a1002b-k: 2,
	logreg_3a643c15517d-regParam: 0.01
}, {
	logreg_3a643c15517d-elasticNetParam: 0.2,
	pca_b11b53a1002b-k: 3,
	logreg_3a643c15517d-regParam: 0.01
}, {
	logreg_3a643c15517d-elasticNetParam: 0.2,
	pca_b11b53a1002b-k: 4,
	logreg_3a643c15517d-regParam: 0.01
}, {
	logreg_3a643c15517d-elasticNetParam: 0.2,
	pca_b11b53a1002b-k: 5,
	logreg_3a643c15517d-regParam: 0.01
}, {
	logreg_3a643c15517d-elasticNetParam: 0.2,
	pca_b11b53a1002b-k: 6,
	logreg_3a643c15517d-regParam: 0.01
}, {
	logreg_3a643c15517d-elasticNetParam: 0.2,
	pca_b11b53a1002...

scala> val cv = new CrossValidator().setEstimator(lrPipeline).setEvaluator(new MulticlassClassificationEvaluator().setLabelCol("indexedLabel").setPredictionCol("prediction")).setEstimatorParamMaps(paramGrid).setNumFolds(3)
cv: org.apache.spark.ml.tuning.CrossValidator = cv_ae1c8fdde36b

scala> val cvModel = cv.fit(df)
cvModel: org.apache.spark.ml.tuning.CrossValidatorModel = cv_ae1c8fdde36b

scala> val lrPredictions = cvModel.transform(test)
lrPredictions: org.apache.spark.sql.DataFrame = [features: vector, label: string ... 7 more fields]

scala> val evaluator = new MulticlassClassificationEvaluator().setLabelCol("indexedLabel").setPredictionCol("prediction")
evaluator: org.apache.spark.ml.evaluation.MulticlassClassificationEvaluator = mcEval_c6a4b78effe0

scala> val lrAccuracy = evaluator.evaluate(lrPredictions)
lrAccuracy: Double = 0.7833268290041506

scala> println("Accuracy: " + lrAccuracy)
Accuracy: 0.7833268290041506

scala> val bestModel = cvModel.bestModel.asInstanceOf[PipelineModel]
bestModel: org.apache.spark.ml.PipelineModel = pipeline_4ff414fedeed

scala> val lrModel = bestModel.stages(3).asInstanceOf[LogisticRegressionModel]
lrModel: org.apache.spark.ml.classification.LogisticRegressionModel = logreg_3a643c15517d

scala> println("Coefficients: " + lrModel.coefficientMatrix + " Intercept: " + lrModel.interceptVector + " numClasses: " + lrModel.numClasses + " numFeatures: " + lrModel.numFeatures)
Coefficients: -1.5003517160303808E-7 -1.6893365468787863E-4 ... (6 total) Intercept: [-7.459195847829245] numClasses: 2 numFeatures: 6

scala> val pcaModel = bestModel.stages(0).asInstanceOf[PCAModel]
pcaModel: org.apache.spark.ml.feature.PCAModel = pca_b11b53a1002b

scala> println("Primary Component: " + pcaModel.pc)
Primary Component: -9.905077142269292E-6   -1.435140700776355E-4   ... (6 total)
0.9999999987209459       3.0433787125958012E-5   ...
-1.0528384042028638E-6   -4.2722845240104086E-5  ...
3.036788110999389E-5     -0.9999984834627625     ...
-3.9138987702868906E-5   0.0017298954619051868   ...
-2.1955537150508903E-6   -1.3109584368381985E-4  ...

As can be seen, the optimal number of PCA dimensions is 6.
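The conclusion that k = 6 is optimal is read off from numFeatures of the best model above. To confirm it, and to see the winning regularization settings directly, the averaged cross-validation metrics can be paired with the candidate parameter maps; a short sketch using the cv and cvModel values from this section:

// Pair each candidate ParamMap with its average cross-validation metric
// and print the best-scoring combination.
val (bestParamMap, bestMetric) = cv.getEstimatorParamMaps
  .zip(cvModel.avgMetrics)
  .maxBy(_._2)
println("Best parameter map:\n" + bestParamMap)
println("Best average metric: " + bestMetric)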
