Kaggle Competition (1) Titanic: Machine Learning from Disaster
- October 3, 2019
- Notes
Titanic survival prediction was the first introductory Kaggle competition this beginner attempted. I mainly referred to the following two tutorials:
This model's best score on the Leaderboard was 0.79904, placing in the top 13%.
Since I did this competition quite a while ago, I have forgotten many of the analysis details, and as a first attempt the whole thing is still rather crude. Today, on a whim, I decided to write it up as a simple record (a running log).
Import the required packages:

```python
import re

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.model_selection import GridSearchCV
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import (ExtraTreesClassifier, GradientBoostingClassifier,
                              GradientBoostingRegressor, RandomForestClassifier,
                              VotingClassifier)
```
Read the training and test sets and concatenate them so they can be processed together:

```python
train_raw = pd.read_csv('datasets/train.csv')
test_raw = pd.read_csv('datasets/test.csv')
# DataFrame.append is deprecated in recent pandas; pd.concat is equivalent
train_test = pd.concat([train_raw, test_raw], ignore_index=True, sort=False)
```
The title in a passenger's name reflects, to some extent, their sex, age, status, and social standing, so it is an important feature that should not be ignored. We first extract the title from the Name field with a regular expression, then group the titles:
- Mr and Don denote men
- Miss, Ms, and Mlle denote unmarried women
- Mrs, Mme, Lady, and Dona denote married women
- Countess and Jonkheer are titles of nobility
- The rarer titles Capt, Col, Dr, Major, and Sir are grouped into an "Other" category
```python
train_test['Title'] = train_test['Name'].apply(lambda x: re.search(r'(\w+)\.', x).group(1))
train_test['Title'].replace(['Don'], 'Mr', inplace=True)
train_test['Title'].replace(['Mlle', 'Ms'], 'Miss', inplace=True)
train_test['Title'].replace(['Mme', 'Lady', 'Dona'], 'Mrs', inplace=True)
train_test['Title'].replace(['Countess', 'Jonkheer'], 'Noble', inplace=True)
train_test['Title'].replace(['Capt', 'Col', 'Dr', 'Major', 'Sir'], 'Other', inplace=True)
```
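As a quick sanity check, the extraction can be exercised on a few sample names in the dataset's "Last, Title. First" format (a standalone sketch, assuming the intended pattern is `r'(\w+)\.'`; the names here are illustrative):

```python
import re

# Extract the first word that is immediately followed by a period
names = [
    "Braund, Mr. Owen Harris",
    "Heikkinen, Miss. Laina",
    "Oliva y Ocana, Dona. Fermina",
]
titles = [re.search(r'(\w+)\.', n).group(1) for n in names]
print(titles)  # ['Mr', 'Miss', 'Dona']
```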
One-hot encode the title categories:

```python
title_onehot = pd.get_dummies(train_test['Title'], prefix='Title')
train_test = pd.concat([train_test, title_onehot], axis=1)
```
One-hot encode sex:

```python
sex_onehot = pd.get_dummies(train_test['Sex'], prefix='Sex')
train_test = pd.concat([train_test, sex_onehot], axis=1)
```
Combine SibSp and Parch into a feature representing family size, since analysis shows that passengers travelling with relatives had a higher survival rate than those travelling alone.

```python
train_test['FamilySize'] = train_test['SibSp'] + train_test['Parch'] + 1
```
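The derivation, and the kind of groupby check that motivates the feature, can be sketched on toy data (the values below are made up for illustration, not taken from the real dataset):

```python
import pandas as pd

# Toy frame: family size is the passenger plus siblings/spouses and parents/children
toy = pd.DataFrame({
    'SibSp':    [0, 1, 0, 2],
    'Parch':    [0, 0, 2, 1],
    'Survived': [0, 1, 1, 0],
})
toy['FamilySize'] = toy['SibSp'] + toy['Parch'] + 1
print(toy['FamilySize'].tolist())  # [1, 2, 3, 4]

# Survival rate per family size is what the analysis would compare
print(toy.groupby('FamilySize')['Survived'].mean())
```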
Fill the missing Embarked values with the mode, then one-hot encode:

```python
train_test['Embarked'].fillna(train_test['Embarked'].mode()[0], inplace=True)
embarked_onehot = pd.get_dummies(train_test['Embarked'], prefix='Embarked')
train_test = pd.concat([train_test, embarked_onehot], axis=1)
```
Since Cabin has too many missing values, we simply use whether a cabin is recorded as the feature:

```python
train_test['Cabin'].fillna('NO', inplace=True)
train_test['Cabin'] = np.where(train_test['Cabin'] == 'NO', 'NO', 'YES')
cabin_onehot = pd.get_dummies(train_test['Cabin'], prefix='Cabin')
train_test = pd.concat([train_test, cabin_onehot], axis=1)
```
Fill the missing Fare values with the mean fare of the same passenger class:

```python
train_test['Fare'].fillna(train_test.groupby('Pclass')['Fare'].transform('mean'), inplace=True)
```
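The groupby-transform fill pattern can be seen on a toy frame (illustrative values only): `transform('mean')` broadcasts each class's mean back to every row, so `fillna` replaces each missing fare with its own class's mean.

```python
import numpy as np
import pandas as pd

# Toy frame: one missing fare in Pclass 3
toy = pd.DataFrame({
    'Pclass': [1, 1, 3, 3],
    'Fare':   [80.0, 100.0, np.nan, 10.0],
})
toy['Fare'] = toy['Fare'].fillna(toy.groupby('Pclass')['Fare'].transform('mean'))
print(toy['Fare'].tolist())  # [80.0, 100.0, 10.0, 10.0]
```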
Because there are group tickets, we spread each ticket's fare evenly across the passengers sharing it:

```python
shares = train_test.groupby('Ticket')['Fare'].transform('count')
train_test['Fare'] = train_test['Fare'] / shares
```
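Why this works: rows sharing a ticket each carry the ticket's total fare, and `transform('count')` gives every row its ticket's group size, so the division yields a per-person fare. A toy sketch (made-up values):

```python
import pandas as pd

# Ticket 'A1' is a group ticket held by two passengers; both rows show the 30.0 total
toy = pd.DataFrame({
    'Ticket': ['A1', 'A1', 'B2'],
    'Fare':   [30.0, 30.0, 8.0],
})
shares = toy.groupby('Ticket')['Fare'].transform('count')
toy['Fare'] = toy['Fare'] / shares
print(toy['Fare'].tolist())  # [15.0, 15.0, 8.0]
```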
Bin the fares:

```python
train_test.loc[train_test['Fare'] < 5, 'Fare'] = 0
train_test.loc[(train_test['Fare'] >= 5) & (train_test['Fare'] < 10), 'Fare'] = 1
train_test.loc[(train_test['Fare'] >= 10) & (train_test['Fare'] < 15), 'Fare'] = 2
train_test.loc[(train_test['Fare'] >= 15) & (train_test['Fare'] < 30), 'Fare'] = 3
train_test.loc[(train_test['Fare'] >= 30) & (train_test['Fare'] < 60), 'Fare'] = 4
train_test.loc[(train_test['Fare'] >= 60) & (train_test['Fare'] < 100), 'Fare'] = 5
train_test.loc[train_test['Fare'] >= 100, 'Fare'] = 6
```
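The same binning can be written more compactly with `pd.cut` (an equivalent alternative, not the original code; the fare values below are illustrative):

```python
import pandas as pd

fares = pd.Series([3.0, 7.5, 12.0, 20.0, 45.0, 80.0, 150.0])
# right=False makes each interval [low, high), matching the >= / < chain above
bins = [-float('inf'), 5, 10, 15, 30, 60, 100, float('inf')]
levels = pd.cut(fares, bins=bins, labels=False, right=False)
print(levels.tolist())  # [0, 1, 2, 3, 4, 5, 6]
```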
Use shares to build another feature that separates passengers on group tickets from those who bought tickets individually:

```python
train_test['GroupTicket'] = np.where(shares == 1, 'NO', 'YES')
group_ticket_onehot = pd.get_dummies(train_test['GroupTicket'], prefix='GroupTicket')
train_test = pd.concat([train_test, group_ticket_onehot], axis=1)
```
For Age, which has many missing values, filling with the mean or median alone is not a good fit. Instead, we use machine learning models to infer age from the other features.

```python
missing_age_df = pd.DataFrame(train_test[['Age', 'Parch', 'Sex', 'SibSp', 'FamilySize',
                                          'Title', 'Fare', 'Pclass', 'Embarked']])
missing_age_df = pd.get_dummies(missing_age_df,
                                columns=['Title', 'FamilySize', 'Sex', 'Pclass', 'Embarked'])
missing_age_train = missing_age_df[missing_age_df['Age'].notnull()]
missing_age_test = missing_age_df[missing_age_df['Age'].isnull()].copy()

def fill_missing_age(missing_age_train, missing_age_test):
    missing_age_X_train = missing_age_train.drop(['Age'], axis=1)
    missing_age_Y_train = missing_age_train['Age']
    missing_age_X_test = missing_age_test.drop(['Age'], axis=1)
    # Model 1: gradient boosting regressor
    gbm_reg = GradientBoostingRegressor(n_estimators=100, max_depth=3, learning_rate=0.01,
                                        max_features=3, random_state=42)
    gbm_reg.fit(missing_age_X_train, missing_age_Y_train)
    missing_age_test['Age_GB'] = gbm_reg.predict(missing_age_X_test)
    # Model 2: linear regression
    lrf_reg = LinearRegression(fit_intercept=True)
    lrf_reg.fit(missing_age_X_train, missing_age_Y_train)
    missing_age_test['Age_LRF'] = lrf_reg.predict(missing_age_X_test)
    # Use the row-wise mean of the two models' predictions as the final estimate
    missing_age_test['Age'] = np.mean([missing_age_test['Age_GB'],
                                       missing_age_test['Age_LRF']], axis=0)
    return missing_age_test['Age']

train_test.loc[train_test['Age'].isnull(), 'Age'] = fill_missing_age(missing_age_train, missing_age_test)
```
Bin the ages:

```python
train_test.loc[train_test['Age'] < 9, 'Age'] = 0
train_test.loc[(train_test['Age'] >= 9) & (train_test['Age'] < 18), 'Age'] = 1
train_test.loc[(train_test['Age'] >= 18) & (train_test['Age'] < 27), 'Age'] = 2
train_test.loc[(train_test['Age'] >= 27) & (train_test['Age'] < 36), 'Age'] = 3
train_test.loc[(train_test['Age'] >= 36) & (train_test['Age'] < 45), 'Age'] = 4
train_test.loc[(train_test['Age'] >= 45) & (train_test['Age'] < 54), 'Age'] = 5
train_test.loc[(train_test['Age'] >= 54) & (train_test['Age'] < 63), 'Age'] = 6
train_test.loc[(train_test['Age'] >= 63) & (train_test['Age'] < 72), 'Age'] = 7
train_test.loc[(train_test['Age'] >= 72) & (train_test['Age'] < 81), 'Age'] = 8
train_test.loc[train_test['Age'] >= 81, 'Age'] = 9
```
Save the test-set PassengerId values:

```python
passengerId_test = train_test['PassengerId'][891:]
```
Drop the features that are no longer needed:

```python
train_test.drop(['PassengerId', 'Name', 'SibSp', 'Parch', 'Title', 'Sex', 'Embarked',
                 'Cabin', 'Ticket', 'GroupTicket'], axis=1, inplace=True)
```
Split back into training and test sets:

```python
train = train_test[:891]
test = train_test[891:]
X_train = train.drop(['Survived'], axis=1)
y_train = train['Survived']
X_test = test.drop(['Survived'], axis=1)
```
Train a random forest, an extremely randomized trees model, and a gradient boosting model, then combine them into the final predictor with a VotingClassifier.

```python
rf = RandomForestClassifier(n_estimators=500, max_depth=5, min_samples_split=13)
et = ExtraTreesClassifier(n_estimators=500, max_depth=7, min_samples_split=8)
gbm = GradientBoostingClassifier(n_estimators=500, learning_rate=0.0135)
voting = VotingClassifier(estimators=[('rf', rf), ('et', et), ('gbm', gbm)], voting='soft')
voting.fit(X_train, y_train)
```
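Before committing to a submission, the ensemble's accuracy can be estimated with cross-validation. A standalone sketch on synthetic data (smaller estimator counts and `make_classification` data are stand-ins; the real pipeline would pass `X_train` and `y_train`):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import (ExtraTreesClassifier, GradientBoostingClassifier,
                              RandomForestClassifier, VotingClassifier)
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the engineered Titanic features
X, y = make_classification(n_samples=300, n_features=10, random_state=42)

rf = RandomForestClassifier(n_estimators=50, random_state=42)
et = ExtraTreesClassifier(n_estimators=50, random_state=42)
gbm = GradientBoostingClassifier(n_estimators=50, random_state=42)
voting = VotingClassifier(estimators=[('rf', rf), ('et', et), ('gbm', gbm)],
                          voting='soft')

# 5-fold cross-validated accuracy of the soft-voting ensemble
scores = cross_val_score(voting, X, y, cv=5)
print(scores.mean())
```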
Predict and generate the submission file:

```python
y_predict = voting.predict(X_test)
submission = pd.DataFrame({'PassengerId': passengerId_test,
                           'Survived': y_predict.astype(np.int32)})
submission.to_csv('submission.csv', index=False)
```
