Tuning classes for each ML estimator
Select the appropriate tuning class for your machine learning estimator.
tune_easy.elasticnet_tuning module
- class tune_easy.elasticnet_tuning.ElasticNetTuning(X, y, x_colnames, y_colname=None, cv_group=None, eval_set_selection=None, **kwargs)
Bases: tune_easy.param_tuning.ParamTuning
Tuning class for ElasticNet
See tune_easy.param_tuning.ParamTuning for the API reference of all methods.
- BAYES_PARAMS = {'alpha': (0.0001, 10), 'l1_ratio': (0, 1)}
- CV_PARAMS_GRID = {'alpha': [0.0001, 0.0002, 0.0005, 0.001, 0.002, 0.005, 0.01, 0.02, 0.05, 0.1, 0.2, 0.5, 1, 2, 5, 10], 'l1_ratio': [0, 0.0001, 0.0002, 0.0005, 0.001, 0.002, 0.005, 0.01, 0.02, 0.05, 0.1, 0.2, 0.5, 0.9, 0.95, 0.99, 1]}
- CV_PARAMS_RANDOM = {'alpha': [0.0001, 0.0002, 0.0004, 0.0007, 0.001, 0.002, 0.004, 0.007, 0.01, 0.02, 0.04, 0.07, 0.1, 0.2, 0.4, 0.7, 1, 2, 4, 7, 10], 'l1_ratio': [0, 0.0001, 0.0002, 0.0004, 0.0007, 0.001, 0.002, 0.004, 0.007, 0.01, 0.02, 0.04, 0.07, 0.1, 0.2, 0.4, 0.5, 0.6, 0.8, 0.9, 0.93, 0.96, 0.98, 0.99, 1]}
- ESTIMATOR = Pipeline(steps=[('scaler', StandardScaler()), ('enet', ElasticNet())])
- FIT_PARAMS = {}
- INIT_POINTS = 10
- INT_PARAMS = []
- NOT_OPT_PARAMS = {}
- N_ITER_BAYES = 45
- N_ITER_OPTUNA = 70
- N_ITER_RANDOM = 250
- PARAM_SCALES = {'alpha': 'log', 'l1_ratio': 'linear'}
- VALIDATION_CURVE_PARAMS = {'alpha': [0, 1e-05, 0.0001, 0.001, 0.003, 0.01, 0.03, 0.1, 0.3, 1, 10, 100], 'l1_ratio': [0, 1e-05, 0.0001, 0.001, 0.01, 0.03, 0.1, 0.3, 0.5, 0.7, 0.9, 0.97, 0.99, 1]}
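The bounds in BAYES_PARAMS are interpreted according to PARAM_SCALES: 'alpha' is searched on a log axis, 'l1_ratio' on a linear one. The following is a minimal illustrative sketch of that convention (it is not tune_easy's internal sampler):

```python
import numpy as np

# Search space and scales copied from ElasticNetTuning above
BAYES_PARAMS = {'alpha': (0.0001, 10), 'l1_ratio': (0, 1)}
PARAM_SCALES = {'alpha': 'log', 'l1_ratio': 'linear'}

def sample_params(rng):
    """Draw one candidate, honoring each parameter's declared scale."""
    params = {}
    for name, (low, high) in BAYES_PARAMS.items():
        if PARAM_SCALES[name] == 'log':
            # sample uniformly in log10 space, then transform back
            params[name] = 10 ** rng.uniform(np.log10(low), np.log10(high))
        else:
            params[name] = rng.uniform(low, high)
    return params

rng = np.random.default_rng(0)
candidate = sample_params(rng)
```

Sampling 'alpha' uniformly in log space spreads trials evenly across its five orders of magnitude instead of clustering them near the upper bound.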
tune_easy.lgbm_tuning module
- class tune_easy.lgbm_tuning.LGBMClassifierTuning(X, y, x_colnames, y_colname=None, cv_group=None, eval_set_selection=None, **kwargs)
Bases: tune_easy.param_tuning.ParamTuning
Tuning class for LGBMClassifier
See tune_easy.param_tuning.ParamTuning for the API reference of all methods.
- BAYES_PARAMS = {'colsample_bytree': (0.4, 1.0), 'min_child_samples': (0, 50), 'num_leaves': (2, 50), 'reg_alpha': (0.0001, 0.1), 'reg_lambda': (0.0001, 0.1), 'subsample': (0.4, 1.0), 'subsample_freq': (0, 7)}
- CV_PARAMS_GRID = {'colsample_bytree': [0.4, 1.0], 'min_child_samples': [2, 10, 50], 'num_leaves': [2, 10, 50], 'reg_alpha': [0.0001, 0.003, 0.1], 'reg_lambda': [0.0001, 0.1], 'subsample': [0.4, 1.0], 'subsample_freq': [0, 7]}
- CV_PARAMS_RANDOM = {'colsample_bytree': [0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0], 'min_child_samples': [0, 2, 8, 14, 20, 26, 32, 38, 44, 50], 'num_leaves': [2, 8, 14, 20, 26, 32, 38, 44, 50], 'reg_alpha': [0.0001, 0.0003, 0.001, 0.003, 0.01, 0.03, 0.1], 'reg_lambda': [0.0001, 0.0003, 0.001, 0.003, 0.01, 0.03, 0.1], 'subsample': [0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0], 'subsample_freq': [0, 1, 2, 3, 4, 5, 6, 7]}
- ESTIMATOR = LGBMClassifier()
- FIT_PARAMS = {'early_stopping_rounds': 10, 'eval_metric': 'binary_logloss', 'verbose': 0}
- INIT_POINTS = 10
- INT_PARAMS = ['num_leaves', 'subsample_freq', 'min_child_samples']
- NOT_OPT_PARAMS = {'boosting_type': 'gbdt', 'n_estimators': 10000, 'objective': None, 'random_state': 42}
- N_ITER_BAYES = 60
- N_ITER_OPTUNA = 200
- N_ITER_RANDOM = 400
- PARAM_SCALES = {'colsample_bytree': 'linear', 'min_child_samples': 'linear', 'num_leaves': 'linear', 'reg_alpha': 'log', 'reg_lambda': 'log', 'subsample': 'linear', 'subsample_freq': 'linear'}
- VALIDATION_CURVE_PARAMS = {'colsample_bytree': [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0], 'min_child_samples': [0, 2, 5, 10, 20, 30, 50, 70, 100], 'num_leaves': [2, 4, 8, 16, 32, 64, 96, 128, 192, 256], 'reg_alpha': [0, 0.0001, 0.001, 0.003, 0.01, 0.03, 0.1, 1, 10], 'reg_lambda': [0, 0.0001, 0.001, 0.003, 0.01, 0.03, 0.1, 1, 10], 'subsample': [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0], 'subsample_freq': [0, 1, 2, 3, 4, 5, 6, 7]}
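Three kinds of constants combine to produce the final estimator parameters: NOT_OPT_PARAMS are held fixed, INT_PARAMS names parameters whose tuned values must be integers, and the remaining tuned values pass through unchanged. This sketch shows one plausible way to assemble them; the best_params dict is a hypothetical tuning result, and this is not necessarily tune_easy's exact merging code:

```python
# Fixed params and integer params copied from LGBMClassifierTuning above
NOT_OPT_PARAMS = {'boosting_type': 'gbdt', 'n_estimators': 10000,
                  'objective': None, 'random_state': 42}
INT_PARAMS = ['num_leaves', 'subsample_freq', 'min_child_samples']

# Hypothetical result of a tuning run (optimizers may suggest floats)
best_params = {'num_leaves': 31.6, 'subsample_freq': 2.2,
               'min_child_samples': 14.9, 'reg_alpha': 0.004}

# Round integer-valued params, then merge in the fixed params
final_params = {k: (round(v) if k in INT_PARAMS else v)
                for k, v in best_params.items()}
final_params = {**NOT_OPT_PARAMS, **final_params}
```

The resulting dict could then be passed to LGBMClassifier(**final_params).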
- class tune_easy.lgbm_tuning.LGBMRegressorTuning(X, y, x_colnames, y_colname=None, cv_group=None, eval_set_selection=None, **kwargs)
Bases: tune_easy.param_tuning.ParamTuning
Tuning class for LGBMRegressor
See tune_easy.param_tuning.ParamTuning for the API reference of all methods.
- BAYES_PARAMS = {'colsample_bytree': (0.4, 1.0), 'min_child_samples': (0, 50), 'num_leaves': (2, 50), 'reg_alpha': (0.0001, 0.1), 'reg_lambda': (0.0001, 0.1), 'subsample': (0.4, 1.0), 'subsample_freq': (0, 7)}
- CV_PARAMS_GRID = {'colsample_bytree': [0.4, 1.0], 'min_child_samples': [2, 10, 50], 'num_leaves': [2, 10, 50], 'reg_alpha': [0.0001, 0.003, 0.1], 'reg_lambda': [0.0001, 0.1], 'subsample': [0.4, 1.0], 'subsample_freq': [0, 7]}
- CV_PARAMS_RANDOM = {'colsample_bytree': [0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0], 'min_child_samples': [0, 2, 8, 14, 20, 26, 32, 38, 44, 50], 'num_leaves': [2, 8, 14, 20, 26, 32, 38, 44, 50], 'reg_alpha': [0.0001, 0.0003, 0.001, 0.003, 0.01, 0.03, 0.1], 'reg_lambda': [0.0001, 0.0003, 0.001, 0.003, 0.01, 0.03, 0.1], 'subsample': [0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0], 'subsample_freq': [0, 1, 2, 3, 4, 5, 6, 7]}
- ESTIMATOR = LGBMRegressor()
- FIT_PARAMS = {'early_stopping_rounds': 10, 'eval_metric': 'rmse', 'verbose': 0}
- INIT_POINTS = 10
- INT_PARAMS = ['num_leaves', 'subsample_freq', 'min_child_samples']
- NOT_OPT_PARAMS = {'boosting_type': 'gbdt', 'n_estimators': 10000, 'objective': 'regression', 'random_state': 42}
- N_ITER_BAYES = 60
- N_ITER_OPTUNA = 200
- N_ITER_RANDOM = 400
- PARAM_SCALES = {'colsample_bytree': 'linear', 'min_child_samples': 'linear', 'num_leaves': 'linear', 'reg_alpha': 'log', 'reg_lambda': 'log', 'subsample': 'linear', 'subsample_freq': 'linear'}
- VALIDATION_CURVE_PARAMS = {'colsample_bytree': [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0], 'min_child_samples': [0, 2, 5, 10, 20, 30, 50, 70, 100], 'num_leaves': [2, 4, 8, 16, 32, 64, 96, 128, 192, 256], 'reg_alpha': [0, 0.0001, 0.001, 0.003, 0.01, 0.03, 0.1, 1, 10], 'reg_lambda': [0, 0.0001, 0.001, 0.003, 0.01, 0.03, 0.1, 1, 10], 'subsample': [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0], 'subsample_freq': [0, 1, 2, 3, 4, 5, 6, 7]}
tune_easy.logisticregression_tuning module
- class tune_easy.logisticregression_tuning.LogisticRegressionTuning(X, y, x_colnames, y_colname=None, cv_group=None, eval_set_selection=None, **kwargs)
Bases: tune_easy.param_tuning.ParamTuning
Tuning class for LogisticRegression
See tune_easy.param_tuning.ParamTuning for the API reference of all methods.
- BAYES_PARAMS = {'C': (0.01, 1000)}
- CV_PARAMS_GRID = {'C': [0.01, 0.01778279410038923, 0.03162277660168379, 0.05623413251903491, 0.1, 0.1778279410038923, 0.31622776601683794, 0.5623413251903491, 1.0, 1.7782794100389228, 3.1622776601683795, 5.623413251903491, 10.0, 17.78279410038923, 31.622776601683793, 56.23413251903491, 100.0, 177.82794100389228, 316.22776601683796, 562.341325190349, 1000.0]}
- CV_PARAMS_RANDOM = {'C': [0.01, 0.015848931924611134, 0.025118864315095794, 0.039810717055349734, 0.06309573444801933, 0.1, 0.15848931924611143, 0.25118864315095807, 0.3981071705534973, 0.6309573444801934, 1.0, 1.584893192461114, 2.5118864315095824, 3.981071705534973, 6.309573444801936, 10.0, 15.848931924611142, 25.11886431509582, 39.810717055349734, 63.095734448019364, 100.0, 158.48931924611142, 251.18864315095823, 398.1071705534977, 630.9573444801943, 1000.0]}
- ESTIMATOR = Pipeline(steps=[('scaler', StandardScaler()), ('logr', LogisticRegression())])
- FIT_PARAMS = {}
- INIT_POINTS = 5
- INT_PARAMS = []
- NOT_OPT_PARAMS = {'penalty': 'l2', 'solver': 'lbfgs'}
- N_ITER_BAYES = 20
- N_ITER_OPTUNA = 25
- N_ITER_RANDOM = 25
- PARAM_SCALES = {'C': 'log', 'l1_ratio': 'linear'}
- VALIDATION_CURVE_PARAMS = {'C': [0.001, 0.0031622776601683794, 0.01, 0.03162277660168379, 0.1, 0.31622776601683794, 1.0, 3.1622776601683795, 10.0, 31.622776601683793, 100.0, 316.22776601683796, 1000.0, 3162.2776601683795, 10000.0]}
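The long decimal lists for 'C' above are not arbitrary: they are evenly spaced points on a log axis (consistent with PARAM_SCALES marking 'C' as 'log'). They can be reproduced with numpy:

```python
import numpy as np

# The grid-search list: 21 points evenly spaced in log10 from 1e-2 to 1e3
grid_c = np.logspace(-2, 3, num=21)

# The random-search list: the same span at a finer 26-point resolution
random_c = np.logspace(-2, 3, num=26)
```

Generating candidate lists this way keeps the per-decade density uniform, which matters for a regularization strength that spans five orders of magnitude.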
tune_easy.rf_tuning module
- class tune_easy.rf_tuning.RFClassifierTuning(X, y, x_colnames, y_colname=None, cv_group=None, eval_set_selection=None, **kwargs)
Bases: tune_easy.param_tuning.ParamTuning
Tuning class for RandomForestClassifier
See tune_easy.param_tuning.ParamTuning for the API reference of all methods.
- BAYES_PARAMS = {'max_depth': (2, 32), 'max_features': (1, 64), 'min_samples_leaf': (1, 16), 'min_samples_split': (2, 32), 'n_estimators': (20, 160)}
- CV_PARAMS_GRID = {'max_depth': [2, 8, 32], 'max_features': ['auto', 'sqrt', 'log2'], 'min_samples_leaf': [1, 4, 16], 'min_samples_split': [2, 8, 32], 'n_estimators': [20, 80, 160]}
- CV_PARAMS_RANDOM = {'max_depth': [2, 3, 4, 6, 8, 12, 16, 24, 32], 'max_features': ['auto', 'sqrt', 'log2'], 'min_samples_leaf': [1, 2, 3, 4, 6, 8, 12, 16], 'min_samples_split': [2, 3, 4, 6, 8, 12, 16, 24, 32], 'n_estimators': [20, 30, 40, 60, 80, 120, 160]}
- ESTIMATOR = RandomForestClassifier()
- FIT_PARAMS = {}
- INIT_POINTS = 80
- INT_PARAMS = ['n_estimators', 'max_features', 'max_depth', 'min_samples_split', 'min_samples_leaf']
- NOT_OPT_PARAMS = {'random_state': 42}
- N_ITER_BAYES = 10
- N_ITER_OPTUNA = 120
- N_ITER_RANDOM = 150
- PARAM_SCALES = {'max_depth': 'linear', 'max_features': 'linear', 'min_samples_leaf': 'linear', 'min_samples_split': 'linear', 'n_estimators': 'linear'}
- VALIDATION_CURVE_PARAMS = {'max_depth': [1, 2, 4, 6, 8, 12, 16, 24, 32, 48], 'max_features': [1, 2, 'auto', 'sqrt', 'log2'], 'min_samples_leaf': [1, 2, 4, 6, 8, 12, 16, 24, 32], 'min_samples_split': [2, 4, 6, 8, 12, 16, 24, 32, 48], 'n_estimators': [10, 20, 30, 40, 60, 80, 120, 160, 240]}
- class tune_easy.rf_tuning.RFRegressorTuning(X, y, x_colnames, y_colname=None, cv_group=None, eval_set_selection=None, **kwargs)
Bases: tune_easy.param_tuning.ParamTuning
Tuning class for RandomForestRegressor
See tune_easy.param_tuning.ParamTuning for the API reference of all methods.
- BAYES_PARAMS = {'max_depth': (2, 32), 'max_features': (1, 64), 'min_samples_leaf': (1, 16), 'min_samples_split': (2, 32), 'n_estimators': (20, 160)}
- CV_PARAMS_GRID = {'max_depth': [2, 8, 32], 'max_features': ['auto', 'sqrt', 'log2'], 'min_samples_leaf': [1, 4, 16], 'min_samples_split': [2, 8, 32], 'n_estimators': [20, 80, 160]}
- CV_PARAMS_RANDOM = {'max_depth': [2, 3, 4, 6, 8, 12, 16, 24, 32], 'max_features': ['auto', 'sqrt', 'log2'], 'min_samples_leaf': [1, 2, 3, 4, 6, 8, 12, 16], 'min_samples_split': [2, 3, 4, 6, 8, 12, 16, 24, 32], 'n_estimators': [20, 30, 40, 60, 80, 120, 160]}
- ESTIMATOR = RandomForestRegressor()
- FIT_PARAMS = {}
- INIT_POINTS = 80
- INT_PARAMS = ['n_estimators', 'max_features', 'max_depth', 'min_samples_split', 'min_samples_leaf']
- NOT_OPT_PARAMS = {'random_state': 42}
- N_ITER_BAYES = 10
- N_ITER_OPTUNA = 120
- N_ITER_RANDOM = 150
- PARAM_SCALES = {'max_depth': 'linear', 'max_features': 'linear', 'min_samples_leaf': 'linear', 'min_samples_split': 'linear', 'n_estimators': 'linear'}
- VALIDATION_CURVE_PARAMS = {'max_depth': [1, 2, 4, 6, 8, 12, 16, 24, 32, 48], 'max_features': [1, 2, 'auto', 'sqrt', 'log2'], 'min_samples_leaf': [1, 2, 4, 6, 8, 12, 16, 24, 32], 'min_samples_split': [2, 4, 6, 8, 12, 16, 24, 32, 48], 'n_estimators': [10, 20, 30, 40, 60, 80, 120, 160, 240]}
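The random-forest classes search five parameters, so the size of CV_PARAMS_GRID drives grid-search cost directly: exhaustive search fits every combination once per cross-validation fold. A quick back-of-the-envelope check (the 5-fold setting here is an assumption for illustration, not a documented default):

```python
from math import prod

# Grid copied from RFRegressorTuning above
CV_PARAMS_GRID = {'max_depth': [2, 8, 32],
                  'max_features': ['auto', 'sqrt', 'log2'],
                  'min_samples_leaf': [1, 4, 16],
                  'min_samples_split': [2, 8, 32],
                  'n_estimators': [20, 80, 160]}

# Exhaustive grid search evaluates the Cartesian product of all lists
n_candidates = prod(len(v) for v in CV_PARAMS_GRID.values())  # 3**5 = 243
n_folds = 5  # hypothetical CV setting
n_fits = n_candidates * n_folds
```

This explains why the grid is limited to 3 values per parameter while random search (N_ITER_RANDOM = 150) draws from the finer CV_PARAMS_RANDOM lists at a fixed, smaller budget.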
tune_easy.svm_tuning module
- class tune_easy.svm_tuning.SVMClassifierTuning(X, y, x_colnames, y_colname=None, cv_group=None, eval_set_selection=None, **kwargs)
Bases: tune_easy.param_tuning.ParamTuning
Tuning class for SVC
See tune_easy.param_tuning.ParamTuning for the API reference of all methods.
- BAYES_PARAMS = {'C': (0.01, 100), 'gamma': (0.01, 100)}
- CV_PARAMS_GRID = {'C': [0.01, 0.02, 0.05, 0.1, 0.2, 0.5, 1, 2, 5, 10, 20, 50, 100], 'gamma': [0.01, 0.02, 0.05, 0.1, 0.2, 0.5, 1, 2, 5, 10, 20, 50, 100]}
- CV_PARAMS_RANDOM = {'C': [0.01, 0.02, 0.04, 0.07, 0.1, 0.2, 0.4, 0.7, 1, 2, 4, 7, 10, 20, 40, 70, 100], 'gamma': [0.01, 0.02, 0.04, 0.07, 0.1, 0.2, 0.4, 0.7, 1, 2, 4, 7, 10, 20, 40, 70, 100]}
- ESTIMATOR = Pipeline(steps=[('scaler', StandardScaler()), ('svc', SVC())])
- FIT_PARAMS = {}
- INIT_POINTS = 10
- INT_PARAMS = []
- NOT_OPT_PARAMS = {'kernel': 'rbf'}
- N_ITER_BAYES = 80
- N_ITER_OPTUNA = 120
- N_ITER_RANDOM = 160
- PARAM_SCALES = {'C': 'log', 'gamma': 'log'}
- VALIDATION_CURVE_PARAMS = {'C': [0.001, 0.003, 0.01, 0.03, 0.1, 0.3, 1, 3, 10, 30, 100, 300, 1000], 'gamma': [0.0001, 0.0003, 0.001, 0.003, 0.01, 0.03, 0.1, 0.3, 1, 3, 10, 30, 100, 300, 1000]}
- class tune_easy.svm_tuning.SVMRegressorTuning(X, y, x_colnames, y_colname=None, cv_group=None, eval_set_selection=None, **kwargs)
Bases: tune_easy.param_tuning.ParamTuning
Tuning class for SVR
See tune_easy.param_tuning.ParamTuning for the API reference of all methods.
- BAYES_PARAMS = {'C': (0.01, 10), 'epsilon': (0, 0.2), 'gamma': (0.001, 10)}
- CV_PARAMS_GRID = {'C': [0.01, 0.1, 0.3, 1, 3, 10], 'epsilon': [0, 0.01, 0.02, 0.05, 0.1, 0.2], 'gamma': [0.001, 0.01, 0.03, 0.1, 0.3, 1, 10]}
- CV_PARAMS_RANDOM = {'C': [0.01, 0.1, 0.2, 0.5, 1, 2, 5, 10], 'epsilon': [0, 0.01, 0.02, 0.03, 0.05, 0.1, 0.15, 0.2], 'gamma': [0.001, 0.01, 0.02, 0.05, 0.1, 0.2, 0.5, 1, 2, 5, 10]}
- ESTIMATOR = Pipeline(steps=[('scaler', StandardScaler()), ('svr', SVR())])
- FIT_PARAMS = {}
- INIT_POINTS = 20
- INT_PARAMS = []
- NOT_OPT_PARAMS = {'kernel': 'rbf'}
- N_ITER_BAYES = 100
- N_ITER_OPTUNA = 300
- N_ITER_RANDOM = 250
- PARAM_SCALES = {'C': 'log', 'epsilon': 'linear', 'gamma': 'log'}
- VALIDATION_CURVE_PARAMS = {'C': [0.001, 0.003, 0.01, 0.03, 0.1, 0.3, 1, 3, 10, 30, 100, 300, 1000], 'epsilon': [0, 0.01, 0.02, 0.05, 0.1, 0.15, 0.2, 0.3, 0.4, 0.5], 'gamma': [0.0001, 0.0003, 0.001, 0.003, 0.01, 0.03, 0.1, 0.3, 1, 3, 10, 30, 100, 300, 1000]}
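Notice that each VALIDATION_CURVE_PARAMS range is deliberately wider than the corresponding tuning bound in BAYES_PARAMS (e.g. 'C' is tuned over 0.01–10 but plotted over 0.001–1000). A plotted validation curve therefore shows whether the optimum sits at a boundary of the tuning range, which would suggest widening it. This property can be checked directly from the constants above:

```python
# Tuning bounds and validation-curve ranges copied from SVMRegressorTuning
BAYES_PARAMS = {'C': (0.01, 10), 'epsilon': (0, 0.2), 'gamma': (0.001, 10)}
VALIDATION_CURVE_PARAMS = {
    'C': [0.001, 0.003, 0.01, 0.03, 0.1, 0.3, 1, 3, 10, 30, 100, 300, 1000],
    'epsilon': [0, 0.01, 0.02, 0.05, 0.1, 0.15, 0.2, 0.3, 0.4, 0.5],
    'gamma': [0.0001, 0.0003, 0.001, 0.003, 0.01, 0.03, 0.1, 0.3, 1,
              3, 10, 30, 100, 300, 1000],
}

# Each validation-curve range brackets the tuning range on both sides
for name, (low, high) in BAYES_PARAMS.items():
    values = VALIDATION_CURVE_PARAMS[name]
    assert min(values) <= low and max(values) >= high
```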
tune_easy.xgb_tuning module
- class tune_easy.xgb_tuning.XGBClassifierTuning(X, y, x_colnames, y_colname=None, cv_group=None, eval_set_selection=None, **kwargs)
Bases: tune_easy.param_tuning.ParamTuning
Tuning class for XGBClassifier
See tune_easy.param_tuning.ParamTuning for the API reference of all methods.
- BAYES_PARAMS = {'colsample_bytree': (0.2, 1.0), 'gamma': (0.0001, 0.1), 'learning_rate': (0.05, 0.3), 'max_depth': (2, 9), 'min_child_weight': (1, 10), 'reg_alpha': (0.001, 0.1), 'reg_lambda': (0.001, 0.1), 'subsample': (0.2, 1.0)}
- CV_PARAMS_GRID = {'colsample_bytree': [0.2, 0.5, 1.0], 'learning_rate': [0.05, 0.3], 'max_depth': [2, 9], 'min_child_weight': [1, 4, 10], 'reg_lambda': [0.1, 1], 'subsample': [0.2, 0.5, 0.8]}
- CV_PARAMS_RANDOM = {'colsample_bytree': [0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0], 'gamma': [0.0001, 0.0003, 0.001, 0.003, 0.01, 0.03, 0.1], 'learning_rate': [0.05, 0.1, 0.2, 0.3], 'max_depth': [2, 3, 4, 5, 6, 7, 8, 9], 'min_child_weight': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10], 'reg_alpha': [0.001, 0.003, 0.01, 0.03, 0.1], 'reg_lambda': [0.001, 0.003, 0.01, 0.03, 0.1], 'subsample': [0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0]}
- ESTIMATOR = XGBClassifier(base_score=None, booster=None, colsample_bylevel=None, colsample_bynode=None, colsample_bytree=None, gamma=None, gpu_id=None, importance_type='gain', interaction_constraints=None, learning_rate=None, max_delta_step=None, max_depth=None, min_child_weight=None, missing=nan, monotone_constraints=None, n_estimators=100, n_jobs=None, num_parallel_tree=None, random_state=None, reg_alpha=None, reg_lambda=None, scale_pos_weight=None, subsample=None, tree_method=None, validate_parameters=None, verbosity=None)
- FIT_PARAMS = {'early_stopping_rounds': 10, 'eval_metric': 'logloss', 'verbose': 0}
- INIT_POINTS = 10
- INT_PARAMS = ['min_child_weight', 'max_depth']
- NOT_OPT_PARAMS = {'booster': 'gbtree', 'n_estimators': 10000, 'objective': None, 'random_state': 42, 'use_label_encoder': False}
- N_ITER_BAYES = 60
- N_ITER_OPTUNA = 120
- N_ITER_RANDOM = 200
- PARAM_SCALES = {'colsample_bytree': 'linear', 'gamma': 'log', 'learning_rate': 'log', 'max_depth': 'linear', 'min_child_weight': 'linear', 'reg_alpha': 'log', 'reg_lambda': 'log', 'subsample': 'linear'}
- VALIDATION_CURVE_PARAMS = {'colsample_bytree': [0, 0.2, 0.4, 0.6, 0.8, 1.0], 'gamma': [0, 0.0001, 0.001, 0.01, 0.03, 0.1, 0.3, 1.0], 'learning_rate': [0.01, 0.02, 0.05, 0.1, 0.2, 0.5, 1.0], 'max_depth': [1, 2, 3, 4, 6, 8, 10], 'min_child_weight': [1, 3, 5, 7, 9, 11, 15], 'reg_alpha': [0, 0.0001, 0.001, 0.01, 0.03, 0.1, 0.3, 1.0], 'reg_lambda': [0, 0.0001, 0.001, 0.01, 0.03, 0.1, 0.3, 1.0], 'subsample': [0.1, 0.2, 0.4, 0.6, 0.8, 1.0]}
- class tune_easy.xgb_tuning.XGBRegressorTuning(X, y, x_colnames, y_colname=None, cv_group=None, eval_set_selection=None, **kwargs)
Bases: tune_easy.param_tuning.ParamTuning
Tuning class for XGBRegressor
See tune_easy.param_tuning.ParamTuning for the API reference of all methods.
- BAYES_PARAMS = {'colsample_bytree': (0.2, 1.0), 'gamma': (0.0001, 0.1), 'learning_rate': (0.05, 0.3), 'max_depth': (2, 9), 'min_child_weight': (1, 10), 'reg_alpha': (0.001, 0.1), 'reg_lambda': (0.001, 0.1), 'subsample': (0.2, 1.0)}
- CV_PARAMS_GRID = {'colsample_bytree': [0.2, 0.5, 1.0], 'learning_rate': [0.05, 0.3], 'max_depth': [2, 9], 'min_child_weight': [1, 4, 10], 'reg_lambda': [0.1, 1], 'subsample': [0.2, 0.5, 0.8]}
- CV_PARAMS_RANDOM = {'colsample_bytree': [0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0], 'gamma': [0.0001, 0.0003, 0.001, 0.003, 0.01, 0.03, 0.1], 'learning_rate': [0.05, 0.1, 0.2, 0.3], 'max_depth': [2, 3, 4, 5, 6, 7, 8, 9], 'min_child_weight': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10], 'reg_alpha': [0.001, 0.003, 0.01, 0.03, 0.1], 'reg_lambda': [0.001, 0.003, 0.01, 0.03, 0.1], 'subsample': [0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0]}
- ESTIMATOR = XGBRegressor(base_score=None, booster=None, colsample_bylevel=None, colsample_bynode=None, colsample_bytree=None, gamma=None, gpu_id=None, importance_type='gain', interaction_constraints=None, learning_rate=None, max_delta_step=None, max_depth=None, min_child_weight=None, missing=nan, monotone_constraints=None, n_estimators=100, n_jobs=None, num_parallel_tree=None, random_state=None, reg_alpha=None, reg_lambda=None, scale_pos_weight=None, subsample=None, tree_method=None, validate_parameters=None, verbosity=None)
- FIT_PARAMS = {'early_stopping_rounds': 10, 'eval_metric': 'rmse', 'verbose': 0}
- INIT_POINTS = 10
- INT_PARAMS = ['min_child_weight', 'max_depth']
- NOT_OPT_PARAMS = {'booster': 'gbtree', 'n_estimators': 10000, 'objective': 'reg:squarederror', 'random_state': 42}
- N_ITER_BAYES = 60
- N_ITER_OPTUNA = 120
- N_ITER_RANDOM = 200
- PARAM_SCALES = {'colsample_bytree': 'linear', 'gamma': 'log', 'learning_rate': 'log', 'max_depth': 'linear', 'min_child_weight': 'linear', 'reg_alpha': 'log', 'reg_lambda': 'log', 'subsample': 'linear'}
- VALIDATION_CURVE_PARAMS = {'colsample_bytree': [0, 0.2, 0.4, 0.6, 0.8, 1.0], 'gamma': [0, 0.0001, 0.001, 0.01, 0.03, 0.1, 0.3, 1.0], 'learning_rate': [0.01, 0.02, 0.05, 0.1, 0.2, 0.5, 1.0], 'max_depth': [1, 2, 3, 4, 6, 8, 10], 'min_child_weight': [1, 3, 5, 7, 9, 11, 15], 'reg_alpha': [0, 0.0001, 0.001, 0.01, 0.03, 0.1, 0.3, 1.0], 'reg_lambda': [0, 0.0001, 0.001, 0.01, 0.03, 0.1, 0.3, 1.0], 'subsample': [0.1, 0.2, 0.4, 0.6, 0.8, 1.0]}
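Across the gradient-boosting classes, NOT_OPT_PARAMS sets n_estimators to 10000 while FIT_PARAMS sets early_stopping_rounds to 10: the huge tree count is an upper bound, and boosting stops once the eval metric fails to improve for 10 consecutive rounds. The following standalone sketch illustrates that stopping rule in isolation (it is not XGBoost's or tune_easy's actual implementation, and the loss trace is made up):

```python
def early_stop_round(eval_losses, patience=10):
    """Return the 1-based round where training would stop, or None."""
    best = float('inf')
    since_best = 0
    for i, loss in enumerate(eval_losses):
        if loss < best:
            best, since_best = loss, 0  # new best: reset the counter
        else:
            since_best += 1
            if since_best >= patience:
                return i + 1  # patience exhausted: stop here
    return None  # never stopped within the given rounds

# Hypothetical eval-metric trace: improves for 5 rounds, then plateaus
losses = [0.9, 0.7, 0.6, 0.55, 0.52] + [0.52] * 20
stop = early_stop_round(losses, patience=10)  # stops at round 15
```

With this rule in place, tuning learning_rate and the tree-shape parameters does not also require tuning n_estimators, since each candidate effectively picks its own tree count.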