Starting from an example that predicts an infant's chances of survival, this article walks through the PySpark ML package: data loading, data transformation, feature extraction, and machine-learning algorithms. It then covers applications including logistic regression, clustering, natural language processing, and topic extraction.
Predicting infant survival
Data loading
```python
import pyspark.sql.types as typ
```
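A minimal loading sketch, following the book's example. The column list matches the record printed further below; the file name births_transformed.csv.gz is an assumption taken from the book's companion data, and spark is the usual SparkSession.

```python
labels = [
    ('INFANT_ALIVE_AT_REPORT', typ.IntegerType()),
    ('BIRTH_PLACE', typ.StringType()),
    ('MOTHER_AGE_YEARS', typ.IntegerType()),
    ('FATHER_COMBINED_AGE', typ.IntegerType()),
    ('CIG_BEFORE', typ.IntegerType()),
    ('CIG_1_TRI', typ.IntegerType()),
    ('CIG_2_TRI', typ.IntegerType()),
    ('CIG_3_TRI', typ.IntegerType()),
    ('MOTHER_HEIGHT_IN', typ.IntegerType()),
    ('MOTHER_PRE_WEIGHT', typ.IntegerType()),
    ('MOTHER_DELIVERY_WEIGHT', typ.IntegerType()),
    ('MOTHER_WEIGHT_GAIN', typ.IntegerType()),
    ('DIABETES_PRE', typ.IntegerType()),
    ('DIABETES_GEST', typ.IntegerType()),
    ('HYP_TENS_PRE', typ.IntegerType()),
    ('HYP_TENS_GEST', typ.IntegerType()),
    ('PREV_BIRTH_PRETERM', typ.IntegerType()),
]
schema = typ.StructType([
    typ.StructField(name, dtype, False) for name, dtype in labels
])
births = spark.read.csv('births_transformed.csv.gz',
                        header=True, schema=schema)
```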
Data transformation
Casting data types
```python
import pyspark.ml.feature as ft
```
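BIRTH_PLACE is read in as a string code; cast it to an integer so it can be one-hot encoded. The BIRTH_PLACE_INT column name is taken from the record printed below.

```python
births = births.withColumn(
    'BIRTH_PLACE_INT',
    births['BIRTH_PLACE'].cast(typ.IntegerType()))
```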
One-hot encoding
```python
encoder = ft.OneHotEncoder(
    inputCol='BIRTH_PLACE_INT', outputCol='BIRTH_PLACE_VEC')
```
Merging all the features into a single column
```python
featuresCreator = ft.VectorAssembler(
    inputCols=[col[0] for col in labels[2:]] + [encoder.getOutputCol()],
    outputCol='features')
```
Creating an estimator
Logistic regression model
```python
import pyspark.ml.classification as cl
```
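A sketch of the estimator; the hyperparameter values are taken from the book's example.

```python
logistic = cl.LogisticRegression(
    maxIter=10,
    regParam=0.01,
    labelCol='INFANT_ALIVE_AT_REPORT')
```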
Creating a pipeline
```python
from pyspark.ml import Pipeline

pipeline = Pipeline(stages=[encoder, featuresCreator, logistic])
```
Fitting the model
Splitting the data
```python
# the split ratio and seed follow the book's example
births_train, births_test = births \
    .randomSplit([0.7, 0.3], seed=666)
```
Training
```python
model = pipeline.fit(births_train)
```
Testing
```python
test_model = model.transform(births_test)
test_model.take(1)
```
[Row(INFANT_ALIVE_AT_REPORT=0, BIRTH_PLACE='1', MOTHER_AGE_YEARS=13, FATHER_COMBINED_AGE=99, CIG_BEFORE=0, CIG_1_TRI=0, CIG_2_TRI=0, CIG_3_TRI=0, MOTHER_HEIGHT_IN=66, MOTHER_PRE_WEIGHT=133, MOTHER_DELIVERY_WEIGHT=135, MOTHER_WEIGHT_GAIN=2, DIABETES_PRE=0, DIABETES_GEST=0, HYP_TENS_PRE=0, HYP_TENS_GEST=0, PREV_BIRTH_PRETERM=0, BIRTH_PLACE_INT=1, BIRTH_PLACE_VEC=SparseVector(9, {1: 1.0}), features=SparseVector(24, {0: 13.0, 1: 99.0, 6: 66.0, 7: 133.0, 8: 135.0, 9: 2.0, 16: 1.0}), rawPrediction=DenseVector([1.0573, -1.0573]), probability=DenseVector([0.7422, 0.2578]), prediction=0.0)]
Evaluating the model
```python
import pyspark.ml.evaluation as ev
```
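A sketch of the evaluation, following the book's example; the two figures below are the area under the ROC curve and the area under the PR curve, respectively.

```python
evaluator = ev.BinaryClassificationEvaluator(
    rawPredictionCol='probability',
    labelCol='INFANT_ALIVE_AT_REPORT')
print(evaluator.evaluate(test_model,
    {evaluator.metricName: 'areaUnderROC'}))
print(evaluator.evaluate(test_model,
    {evaluator.metricName: 'areaUnderPR'}))
```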
0.7401301847095617
0.7139354342365674
Saving models
Saving the pipeline definition
```python
pipelinePath = './infant_oneHotEncoder_Logistic_Pipeline'
pipeline.write().overwrite().save(pipelinePath)
```
Reloading the pipeline and refitting it reproduces the earlier prediction:
```python
loadedPipeline = Pipeline.load(pipelinePath)
loadedPipeline.fit(births_train).transform(births_test).take(1)
```
[Row(INFANT_ALIVE_AT_REPORT=0, BIRTH_PLACE='1', MOTHER_AGE_YEARS=13, FATHER_COMBINED_AGE=99, CIG_BEFORE=0, CIG_1_TRI=0, CIG_2_TRI=0, CIG_3_TRI=0, MOTHER_HEIGHT_IN=66, MOTHER_PRE_WEIGHT=133, MOTHER_DELIVERY_WEIGHT=135, MOTHER_WEIGHT_GAIN=2, DIABETES_PRE=0, DIABETES_GEST=0, HYP_TENS_PRE=0, HYP_TENS_GEST=0, PREV_BIRTH_PRETERM=0, BIRTH_PLACE_INT=1, BIRTH_PLACE_VEC=SparseVector(9, {1: 1.0}), features=SparseVector(24, {0: 13.0, 1: 99.0, 6: 66.0, 7: 133.0, 8: 135.0, 9: 2.0, 16: 1.0}), rawPrediction=DenseVector([1.0573, -1.0573]), probability=DenseVector([0.7422, 0.2578]), prediction=0.0)]
Saving the fitted model
```python
from pyspark.ml import PipelineModel
```
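Unlike the pipeline definition, a fitted PipelineModel can be reloaded and used for scoring without retraining. A sketch, with the path name assumed by analogy with the pipeline path above:

```python
modelPath = './infant_oneHotEncoder_Logistic_PipelineModel'
model.write().overwrite().save(modelPath)

loadedPipelineModel = PipelineModel.load(modelPath)
loadedPipelineModel.transform(births_test).take(1)
```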
[Row(INFANT_ALIVE_AT_REPORT=0, BIRTH_PLACE='1', MOTHER_AGE_YEARS=13, FATHER_COMBINED_AGE=99, CIG_BEFORE=0, CIG_1_TRI=0, CIG_2_TRI=0, CIG_3_TRI=0, MOTHER_HEIGHT_IN=66, MOTHER_PRE_WEIGHT=133, MOTHER_DELIVERY_WEIGHT=135, MOTHER_WEIGHT_GAIN=2, DIABETES_PRE=0, DIABETES_GEST=0, HYP_TENS_PRE=0, HYP_TENS_GEST=0, PREV_BIRTH_PRETERM=0, BIRTH_PLACE_INT=1, BIRTH_PLACE_VEC=SparseVector(9, {1: 1.0}), features=SparseVector(24, {0: 13.0, 1: 99.0, 6: 66.0, 7: 133.0, 8: 135.0, 9: 2.0, 16: 1.0}), rawPrediction=DenseVector([1.0573, -1.0573]), probability=DenseVector([0.7422, 0.2578]), prediction=0.0)]
Hyperparameter tuning
Grid search
Step 1: define the search space
```python
import pyspark.ml.tuning as tune
```
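A sketch of the grid. The candidate values follow the book's example; the best combination reported below (maxIter=50 with regParam=0.01) lies inside this grid.

```python
logistic = cl.LogisticRegression(
    labelCol='INFANT_ALIVE_AT_REPORT')
grid = tune.ParamGridBuilder() \
    .addGrid(logistic.maxIter, [2, 10, 50]) \
    .addGrid(logistic.regParam, [0.01, 0.05, 0.3]) \
    .build()
```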
Step 2: create the evaluator
```python
evaluator = ev.BinaryClassificationEvaluator(
    rawPredictionCol='probability',
    labelCol='INFANT_ALIVE_AT_REPORT')
```
Step 3: set up the cross-validation
```python
cv = tune.CrossValidator(
    estimator=logistic,
    estimatorParamMaps=grid,
    evaluator=evaluator)
```
Step 4: build the transformation pipeline
```python
pipeline = Pipeline(stages=[encoder, featuresCreator])
data_transformer = pipeline.fit(births_train)
```
Step 5: find the best hyperparameter set
```python
cvModel = cv.fit(data_transformer.transform(births_train))
# note: despite its name, this transforms the test set, as in the book
data_train = data_transformer \
    .transform(births_test)
```
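Scoring the tuned model on the held-out data; a sketch (the cv_results name is ours) that produces the two figures below, area under ROC and area under PR.

```python
cv_results = cvModel.transform(data_train)
print(evaluator.evaluate(cv_results,
    {evaluator.metricName: 'areaUnderROC'}))
print(evaluator.evaluate(cv_results,
    {evaluator.metricName: 'areaUnderPR'}))
```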
0.7404959803309813
0.7157971108486731
Extracting the winning hyperparameters
```python
results = [
    ([{p.name: v} for p, v in params.items()], metric)
    for params, metric in zip(grid, cvModel.avgMetrics)
]
sorted(results, key=lambda el: el[1], reverse=True)[0]
```
([{'maxIter': 50}, {'regParam': 0.01}], 0.7386350804981119)
Train-validation splitting
Use the chi-square selector to keep the five most predictive features:
```python
selector = ft.ChiSqSelector(
    numTopFeatures=5,
    featuresCol=featuresCreator.getOutputCol(),
    outputCol='selectedFeatures',
    labelCol='INFANT_ALIVE_AT_REPORT')
```
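The estimator must now read from selectedFeatures rather than features, and the selector joins the transformation pipeline; a sketch following the book's example:

```python
logistic = cl.LogisticRegression(
    labelCol='INFANT_ALIVE_AT_REPORT',
    featuresCol='selectedFeatures')
pipeline = Pipeline(stages=[encoder, featuresCreator, selector])
data_transformer = pipeline.fit(births_train)
```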
```python
tvs = tune.TrainValidationSplit(
    estimator=logistic,
    estimatorParamMaps=grid,
    evaluator=evaluator)
```
```python
tvsModel = tvs.fit(
    data_transformer.transform(births_train))
```
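Evaluation mirrors the cross-validation step; the two figures below are, again, the areas under the ROC and PR curves.

```python
results = tvsModel.transform(
    data_transformer.transform(births_test))
print(evaluator.evaluate(results,
    {evaluator.metricName: 'areaUnderROC'}))
print(evaluator.evaluate(results,
    {evaluator.metricName: 'areaUnderPR'}))
```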
0.7294296314442145
0.703775950281647
As the results above show, the model performs slightly worse with fewer features, but the gap is not large.
Other functionality of PySpark ML
NLP-related feature extraction
We take a simple dataset as an example; only its first document is reproduced here, reconstructed from the tokenizer output below.
```python
text_data = spark.createDataFrame([
    ['''Machine learning can be applied to a wide variety of data types,
such as vectors, text, images, and structured data. This API adopts the
DataFrame from Spark SQL in order to support a variety of data types.''']
], ['input'])
```
Split the text into words, strip the punctuation, and lowercase everything:
```python
tokenizer = ft.RegexTokenizer(
    inputCol='input',
    outputCol='input_arr',
    pattern=r'\s+|[,.\"]')
```
[Row(input_arr=['machine', 'learning', 'can', 'be', 'applied', 'to', 'a', 'wide', 'variety', 'of', 'data', 'types', 'such', 'as', 'vectors', 'text', 'images', 'and', 'structured', 'data', 'this', 'api', 'adopts', 'the', 'dataframe', 'from', 'spark', 'sql', 'in', 'order', 'to', 'support', 'a', 'variety', 'of', 'data', 'types'])]
Remove unhelpful stop words such as a, be, and so on:
```python
stopwords = ft.StopWordsRemover(
    inputCol=tokenizer.getOutputCol(),
    outputCol='input_stop')
```
[Row(input_stop=['machine', 'learning', 'applied', 'wide', 'variety', 'data', 'types', 'vectors', 'text', 'images', 'structured', 'data', 'api', 'adopts', 'dataframe', 'spark', 'sql', 'order', 'support', 'variety', 'data', 'types'])]
Building the NGram model and the pipeline
```python
ngram = ft.NGram(n=2,
    inputCol=stopwords.getOutputCol(),
    outputCol='nGrams')
```
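Chaining the three transformers and pushing the data through them; data_ngram is our naming for the result.

```python
pipeline = Pipeline(stages=[tokenizer, stopwords, ngram])
data_ngram = pipeline.fit(text_data).transform(text_data)
data_ngram.select('nGrams').take(1)
```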
[Row(nGrams=['machine learning', 'learning applied', 'applied wide', 'wide variety', 'variety data', 'data types', 'types vectors', 'vectors text', 'text images', 'images structured', 'structured data', 'data api', 'api adopts', 'adopts dataframe', 'dataframe spark', 'spark sql', 'sql order', 'order support', 'support variety', 'variety data', 'data types'])]
Discretizing continuous variables
Create a simple dataset
```python
import numpy as np
```
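A sketch of the dataset; the generating function is an assumption following the book's example.

```python
x = np.arange(0, 100)
x = x / 100.0 * np.pi * 4
y = x * np.sin(x / 1.764) + 20.1234

schema = typ.StructType([
    typ.StructField('continuous_var', typ.DoubleType(), False)])
data = spark.createDataFrame(
    [[float(e)] for e in y], schema=schema)
```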
Discretize the continuous variable into five buckets:
```python
discretizer = ft.QuantileDiscretizer(
    numBuckets=5,
    inputCol='continuous_var',
    outputCol='discretized')
```
Check the mean of each bucket:
```python
data_discretized = discretizer.fit(data).transform(data)
data_discretized.groupby('discretized') \
    .mean('continuous_var').sort('discretized').collect()
```
[Row(discretized=0.0, avg(continuous_var)=12.314360733007915),
Row(discretized=1.0, avg(continuous_var)=16.046244793347466),
Row(discretized=2.0, avg(continuous_var)=20.25079947835259),
Row(discretized=3.0, avg(continuous_var)=22.040988218437327),
Row(discretized=4.0, avg(continuous_var)=24.264824657002865)]
Standardizing continuous features
```python
vectorizer = ft.VectorAssembler(
    inputCols=['continuous_var'],
    outputCol='continuous_vec')
```
```python
normalizer = ft.StandardScaler(
    inputCol=vectorizer.getOutputCol(),
    outputCol='normalized',
    withMean=True,
    withStd=True)
```
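Wiring the two stages together; data_standardized is our naming for the result.

```python
pipeline = Pipeline(stages=[vectorizer, normalizer])
data_standardized = pipeline.fit(data).transform(data)
```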
Classification with a random forest
This section uses a random forest to predict infant survival.
```python
import pyspark.sql.functions as func
```
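The classifier expects a numeric (double) label, so cast the label column first and re-split the data; the split parameters repeat the earlier ones.

```python
births = births.withColumn(
    'INFANT_ALIVE_AT_REPORT',
    func.col('INFANT_ALIVE_AT_REPORT').cast(typ.DoubleType()))
births_train, births_test = births.randomSplit([0.7, 0.3], seed=666)
```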
```python
# five trees of depth five; the values follow the book's example
classifier = cl.RandomForestClassifier(
    numTrees=5, maxDepth=5,
    labelCol='INFANT_ALIVE_AT_REPORT')
```
```python
evaluator = ev.BinaryClassificationEvaluator(
    labelCol='INFANT_ALIVE_AT_REPORT')
```
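Fitting and scoring the forest; the two figures below are the areas under the ROC and PR curves.

```python
pipeline = Pipeline(stages=[encoder, featuresCreator, classifier])
model = pipeline.fit(births_train)
test = model.transform(births_test)
print(evaluator.evaluate(test,
    {evaluator.metricName: 'areaUnderROC'}))
print(evaluator.evaluate(test,
    {evaluator.metricName: 'areaUnderPR'}))
```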
0.7625231306933616
0.7474287997552782
Classification with a single tree
```python
classifier = cl.DecisionTreeClassifier(
    maxDepth=5,
    labelCol='INFANT_ALIVE_AT_REPORT')
```
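The pipeline and evaluation are rebuilt the same way as for the forest:

```python
pipeline = Pipeline(stages=[encoder, featuresCreator, classifier])
model = pipeline.fit(births_train)
test = model.transform(births_test)
print(evaluator.evaluate(test,
    {evaluator.metricName: 'areaUnderROC'}))
print(evaluator.evaluate(test,
    {evaluator.metricName: 'areaUnderPR'}))
```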
0.7582781726635287
0.7787580540118526
Clustering
Finding clusters
Use k-means to look for similarities within the dataset:
```python
import pyspark.ml.clustering as clus
```
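A sketch of the clustering pipeline: k=5 matches the five clusters in the output below, and the stages reuse the encoder and assembler defined earlier.

```python
kmeans = clus.KMeans(k=5, featuresCol='features')
pipeline = Pipeline(stages=[encoder, featuresCreator, kmeans])
model = pipeline.fit(births_train)
```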
```python
test = model.transform(births_test)
test.groupBy('prediction') \
    .agg({'*': 'count', 'MOTHER_HEIGHT_IN': 'avg'}).collect()
```
[Row(prediction=1, avg(MOTHER_HEIGHT_IN)=83.91154791154791, count(1)=407),
Row(prediction=3, avg(MOTHER_HEIGHT_IN)=66.64658634538152, count(1)=249),
Row(prediction=4, avg(MOTHER_HEIGHT_IN)=64.31597357170618, count(1)=10292),
Row(prediction=2, avg(MOTHER_HEIGHT_IN)=67.69473684210526, count(1)=475),
Row(prediction=0, avg(MOTHER_HEIGHT_IN)=64.43472584856397, count(1)=2298)]
Topic mining
A simple example
```python
# documents is a placeholder: a list of [string] rows;
# the actual corpus is not shown in the source
text_data = spark.createDataFrame(documents, ['input'])
```
As in the NLP example above, first preprocess the text; the stop-word remover from that section is reused:
```python
tokenizer = ft.RegexTokenizer(
    inputCol='input',
    outputCol='input_arr',
    pattern=r'\s+|[,.\"]')
stringIndexer = ft.CountVectorizer(
    inputCol=stopwords.getOutputCol(),
    outputCol='input_indexed')
```
[Row(input_indexed=SparseVector(257, {2: 7.0, 6: 1.0, 7: 3.0, 10: 3.0, 11: 3.0, 19: 1.0, 27: 1.0, 31: 1.0, 32: 2.0, 35: 2.0, 40: 1.0, 51: 1.0, 56: 1.0, 65: 1.0, 66: 1.0, 72: 1.0, 74: 1.0, 77: 1.0, 81: 1.0, 83: 1.0, 96: 1.0, 106: 1.0, 111: 1.0, 123: 1.0, 128: 1.0, 163: 1.0, 173: 1.0, 204: 1.0, 206: 1.0, 210: 1.0, 250: 1.0, 253: 1.0, 256: 1.0})),
Row(input_indexed=SparseVector(257, {18: 2.0, 19: 1.0, 22: 1.0, 28: 2.0, 30: 2.0, 38: 2.0, 45: 1.0, 46: 1.0, 48: 1.0, 50: 1.0, 59: 1.0, 60: 1.0, 62: 1.0, 68: 1.0, 76: 1.0, 92: 1.0, 100: 1.0, 103: 1.0, 107: 1.0, 108: 1.0, 110: 1.0, 113: 1.0, 121: 1.0, 126: 1.0, 131: 1.0, 140: 1.0, 145: 1.0, 146: 1.0, 147: 1.0, 150: 1.0, 151: 1.0, 160: 1.0, 178: 1.0, 179: 1.0, 186: 1.0, 187: 1.0, 191: 1.0, 193: 1.0, 198: 1.0, 199: 1.0, 202: 1.0, 226: 1.0, 232: 1.0, 240: 1.0, 243: 1.0, 247: 1.0, 252: 1.0}))]
Extract topics with an LDA (Latent Dirichlet Allocation) model:
```python
clustering = clus.LDA(k=2, optimizer='online',
                      featuresCol=stringIndexer.getOutputCol())
```
```python
pipeline = Pipeline(stages=[
    tokenizer,
    stopwords,
    stringIndexer,
    clustering])
```
```python
topics = pipeline \
    .fit(text_data) \
    .transform(text_data)
topics.select('topicDistribution').collect()
```
[Row(topicDistribution=DenseVector([0.2357, 0.7643])),
Row(topicDistribution=DenseVector([0.0362, 0.9638])),
Row(topicDistribution=DenseVector([0.986, 0.014])),
Row(topicDistribution=DenseVector([0.039, 0.961])),
Row(topicDistribution=DenseVector([0.3513, 0.6487])),
Row(topicDistribution=DenseVector([0.9715, 0.0285]))]
Regression models
Select a handful of features to predict MOTHER_WEIGHT_GAIN:
```python
features = ['MOTHER_AGE_YEARS', 'MOTHER_HEIGHT_IN',
            'MOTHER_PRE_WEIGHT', 'DIABETES_PRE',
            'DIABETES_GEST', 'HYP_TENS_PRE', 'HYP_TENS_GEST',
            'PREV_BIRTH_PRETERM', 'CIG_BEFORE', 'CIG_1_TRI',
            'CIG_2_TRI', 'CIG_3_TRI']
```
Merge all the attributes into a single column, then keep the six most important features (see the selector sketch below):
```python
featuresCreator = ft.VectorAssembler(
    inputCols=features,
    outputCol='features')
```
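A sketch of the chi-square selection mentioned above, keeping the top six features; note that ChiSqSelector reads the features column by default.

```python
selector = ft.ChiSqSelector(
    numTopFeatures=6,
    outputCol='selectedFeatures',
    labelCol='MOTHER_WEIGHT_GAIN')
```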
Use gradient-boosted trees to predict the weight gained:
```python
import pyspark.ml.regression as reg
```
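A sketch of the regressor; the iteration count and tree depth follow the book's example.

```python
regressor = reg.GBTRegressor(
    maxIter=15,
    maxDepth=3,
    labelCol='MOTHER_WEIGHT_GAIN')
```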
```python
pipeline = Pipeline(stages=[
    featuresCreator, selector, regressor])
weightGain = pipeline.fit(births_train)
```
```python
# R^2 on the test set; the metric choice follows the book
evaluator = ev.RegressionEvaluator(
    predictionCol='prediction',
    labelCol='MOTHER_WEIGHT_GAIN')
print(evaluator.evaluate(
    weightGain.transform(births_test),
    {evaluator.metricName: 'r2'}))
```
0.48862170400240335
As we can see, this model does not perform well. That most likely comes down to the input features: without more informative features, the model is unlikely to improve much.
Summary
- How Spark ML uses transformers and estimators, and the role they play in pipelines
- Machine-learning topics covered include feature extraction and transformation, logistic regression, clustering, and regression
- A brief introduction to hyperparameter tuning
References
- Learning PySpark (PySpark实战指南), Tomasz Drabas and Denny Lee; Chinese translation by 栾云杰 et al.