mindfoundry.optaas.client.sklearn_pipelines.estimators package
Submodules
mindfoundry.optaas.client.sklearn_pipelines.estimators.ada_boost module
- class mindfoundry.optaas.client.sklearn_pipelines.estimators.ada_boost.AdaBoostClassifier(base_estimator=None, *, n_estimators=50, learning_rate=1.0, algorithm='SAMME.R', random_state=None)[source]
Bases: AdaBoostClassifier, _OptimizableAdaBoost
- make_parameters_constraints_and_prior_means(sk: SklearnParameterMaker, **kwargs) Tuple[List[Parameter], List[Constraint], List[PriorMeanExpression]] [source]
Generates Parameters, Constraints, and PriorMeans to optimize an AdaBoostClassifier.
- class mindfoundry.optaas.client.sklearn_pipelines.estimators.ada_boost.AdaBoostRegressor(base_estimator=None, *, n_estimators=50, learning_rate=1.0, loss='linear', random_state=None)[source]
Bases: AdaBoostRegressor, _OptimizableAdaBoost
- make_parameters_constraints_and_prior_means(sk: SklearnParameterMaker, **kwargs) Tuple[List[Parameter], List[Constraint], List[PriorMeanExpression]] [source]
Generates Parameters, Constraints, and PriorMeans to optimize an AdaBoostRegressor.
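Both wrappers above subclass the scikit-learn estimators of the same name, so they fit and predict like any other sklearn estimator. A minimal sketch using plain scikit-learn (the dataset and hyperparameter values here are illustrative, not values OPTaaS would propose):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier

# Toy dataset; in practice the optimizer tunes n_estimators and learning_rate.
X, y = make_classification(n_samples=200, n_features=10, random_state=0)

clf = AdaBoostClassifier(n_estimators=50, learning_rate=1.0, random_state=0)
clf.fit(X, y)
train_accuracy = clf.score(X, y)
```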
mindfoundry.optaas.client.sklearn_pipelines.estimators.ensemble module
- class mindfoundry.optaas.client.sklearn_pipelines.estimators.ensemble.DecisionTreeClassifier(*, criterion='gini', splitter='best', max_depth=None, min_samples_split=2, min_samples_leaf=1, min_weight_fraction_leaf=0.0, max_features=None, random_state=None, max_leaf_nodes=None, min_impurity_decrease=0.0, class_weight=None, ccp_alpha=0.0)[source]
Bases: DecisionTreeClassifier, _OptimizableDecisionTree, _OptimizableEnsembleClassifier
Allows us to optimize a DecisionTreeClassifier.
- class mindfoundry.optaas.client.sklearn_pipelines.estimators.ensemble.DecisionTreeRegressor(*, criterion='squared_error', splitter='best', max_depth=None, min_samples_split=2, min_samples_leaf=1, min_weight_fraction_leaf=0.0, max_features=None, random_state=None, max_leaf_nodes=None, min_impurity_decrease=0.0, ccp_alpha=0.0)[source]
Bases: DecisionTreeRegressor, _OptimizableDecisionTree
Allows us to optimize a DecisionTreeRegressor.
- criterion_values: Sequence[str] = ['mse', 'friedman_mse', 'mae', 'squared_error']
- min_impurity_decrease_maximum: float = 0.01
- class mindfoundry.optaas.client.sklearn_pipelines.estimators.ensemble.ExtraTreesClassifier(n_estimators=100, *, criterion='gini', max_depth=None, min_samples_split=2, min_samples_leaf=1, min_weight_fraction_leaf=0.0, max_features='auto', max_leaf_nodes=None, min_impurity_decrease=0.0, bootstrap=False, oob_score=False, n_jobs=None, random_state=None, verbose=0, warm_start=False, class_weight=None, ccp_alpha=0.0, max_samples=None)[source]
Bases: ExtraTreesClassifier, _OptimizableEnsembleClassifier, _OptimizableExtraTrees
Allows us to optimize ExtraTreesClassifier estimators.
- class mindfoundry.optaas.client.sklearn_pipelines.estimators.ensemble.ExtraTreesRegressor(n_estimators=100, *, criterion='squared_error', max_depth=None, min_samples_split=2, min_samples_leaf=1, min_weight_fraction_leaf=0.0, max_features='auto', max_leaf_nodes=None, min_impurity_decrease=0.0, bootstrap=False, oob_score=False, n_jobs=None, random_state=None, verbose=0, warm_start=False, ccp_alpha=0.0, max_samples=None)[source]
Bases: ExtraTreesRegressor, _OptimizableEnsembleRegressor, _OptimizableExtraTrees
Allows us to optimize ExtraTreesRegressor estimators.
- class mindfoundry.optaas.client.sklearn_pipelines.estimators.ensemble.GradientBoostingClassifier(*, loss='deviance', learning_rate=0.1, n_estimators=100, subsample=1.0, criterion='friedman_mse', min_samples_split=2, min_samples_leaf=1, min_weight_fraction_leaf=0.0, max_depth=3, min_impurity_decrease=0.0, init=None, random_state=None, max_features=None, verbose=0, max_leaf_nodes=None, warm_start=False, validation_fraction=0.1, n_iter_no_change=None, tol=0.0001, ccp_alpha=0.0)[source]
Bases: GradientBoostingClassifier, _OptimizableGradientBoosting
Allows us to optimize a GradientBoostingClassifier.
- make_parameters_constraints_and_prior_means(sk: SklearnParameterMaker, **kwargs) Tuple[List[Parameter], List[Constraint], List[PriorMeanExpression]] [source]
Generates Parameters, Constraints, and PriorMeans to optimize a GradientBoostingClassifier.
- Parameters
class_count (int, optional) – Number of classes in the classification dataset. If set to a number other than 2, the loss parameter will not be optimized (because it can only be set to “deviance”).
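To illustrate the class_count note above: the loss choice only becomes a free parameter in the binary case, which is why the wrapper skips optimizing loss when class_count differs from 2. A minimal binary-classification sketch with plain scikit-learn (loss is left at its default, since the accepted names differ across scikit-learn versions):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

# Binary problem, i.e. class_count == 2 in the wrapper's terms.
X, y = make_classification(n_samples=200, n_classes=2, random_state=0)

clf = GradientBoostingClassifier(
    n_estimators=100, learning_rate=0.1, max_depth=3, random_state=0
)
clf.fit(X, y)
n_classes_seen = len(clf.classes_)
train_accuracy = clf.score(X, y)
```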
- class mindfoundry.optaas.client.sklearn_pipelines.estimators.ensemble.GradientBoostingRegressor(*, loss='squared_error', learning_rate=0.1, n_estimators=100, subsample=1.0, criterion='friedman_mse', min_samples_split=2, min_samples_leaf=1, min_weight_fraction_leaf=0.0, max_depth=3, min_impurity_decrease=0.0, init=None, random_state=None, max_features=None, alpha=0.9, verbose=0, max_leaf_nodes=None, warm_start=False, validation_fraction=0.1, n_iter_no_change=None, tol=0.0001, ccp_alpha=0.0)[source]
Bases: GradientBoostingRegressor, _OptimizableGradientBoosting
Allows us to optimize a GradientBoostingRegressor.
- make_parameters_constraints_and_prior_means(sk: SklearnParameterMaker, **kwargs) Tuple[List[Parameter], List[Constraint], List[PriorMeanExpression]] [source]
Generates Parameters, Constraints, and PriorMeans to optimize a GradientBoostingRegressor.
- class mindfoundry.optaas.client.sklearn_pipelines.estimators.ensemble.RandomForestClassifier(n_estimators=100, *, criterion='gini', max_depth=None, min_samples_split=2, min_samples_leaf=1, min_weight_fraction_leaf=0.0, max_features='auto', max_leaf_nodes=None, min_impurity_decrease=0.0, bootstrap=True, oob_score=False, n_jobs=None, random_state=None, verbose=0, warm_start=False, class_weight=None, ccp_alpha=0.0, max_samples=None)[source]
Bases: RandomForestClassifier, _OptimizableEnsembleClassifier, _OptimizableRandomForest
Allows us to optimize RandomForestClassifier estimators.
- class mindfoundry.optaas.client.sklearn_pipelines.estimators.ensemble.RandomForestRegressor(n_estimators=100, *, criterion='squared_error', max_depth=None, min_samples_split=2, min_samples_leaf=1, min_weight_fraction_leaf=0.0, max_features='auto', max_leaf_nodes=None, min_impurity_decrease=0.0, bootstrap=True, oob_score=False, n_jobs=None, random_state=None, verbose=0, warm_start=False, ccp_alpha=0.0, max_samples=None)[source]
Bases: RandomForestRegressor, _OptimizableEnsembleRegressor, _OptimizableRandomForest
Allows us to optimize RandomForestRegressor estimators.
mindfoundry.optaas.client.sklearn_pipelines.estimators.ica module
- class mindfoundry.optaas.client.sklearn_pipelines.estimators.ica.FastICA(n_components=None, *, algorithm='parallel', whiten=True, fun='logcosh', fun_args=None, max_iter=200, tol=0.0001, w_init=None, random_state=None)[source]
Bases: FastICA, OptimizableBaseEstimator
- make_parameters_constraints_and_prior_means(sk: SklearnParameterMaker, **kwargs) Tuple[List[Parameter], List[Constraint], List[PriorMeanExpression]] [source]
Generates Parameters, Constraints, and PriorMeans to optimize a FastICA estimator.
- Parameters
feature_count (int) – Total number of features in your dataset.
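The feature_count argument above exists because n_components is bounded by the number of input features. A minimal scikit-learn sketch of that bound (the data here is random and purely illustrative):

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))  # feature_count = 5

# n_components must not exceed feature_count.
ica = FastICA(n_components=2, random_state=0, max_iter=500)
sources = ica.fit_transform(X)
```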
mindfoundry.optaas.client.sklearn_pipelines.estimators.k_neighbors module
- class mindfoundry.optaas.client.sklearn_pipelines.estimators.k_neighbors.KNeighborsClassifier(n_neighbors=5, *, weights='uniform', algorithm='auto', leaf_size=30, p=2, metric='minkowski', metric_params=None, n_jobs=None)[source]
Bases: KNeighborsClassifier, _OptimizableKNeighbors
Allows us to optimize KNeighborsClassifier estimators.
- class mindfoundry.optaas.client.sklearn_pipelines.estimators.k_neighbors.KNeighborsRegressor(n_neighbors=5, *, weights='uniform', algorithm='auto', leaf_size=30, p=2, metric='minkowski', metric_params=None, n_jobs=None)[source]
Bases: KNeighborsRegressor, _OptimizableKNeighbors
Allows us to optimize KNeighborsRegressor estimators.
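Both wrappers subclass the scikit-learn neighbors estimators, so the tunable surface is parameters such as n_neighbors, weights, and the Minkowski power p; note that n_neighbors cannot exceed the number of training samples. A brief sketch:

```python
from sklearn.datasets import make_classification
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=100, random_state=0)

# weights="distance" upweights nearer neighbors; p=2 is Euclidean distance.
knn = KNeighborsClassifier(n_neighbors=5, weights="distance", p=2)
knn.fit(X, y)
train_accuracy = knn.score(X, y)
```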
mindfoundry.optaas.client.sklearn_pipelines.estimators.linear_model module
- class mindfoundry.optaas.client.sklearn_pipelines.estimators.linear_model.Lasso(alpha=1.0, *, fit_intercept=True, normalize='deprecated', precompute=False, copy_X=True, max_iter=1000, tol=0.0001, warm_start=False, positive=False, random_state=None, selection='cyclic')[source]
Bases: Lasso, _OptimizableLinearModel
Allows us to optimize a Lasso estimator.
- class mindfoundry.optaas.client.sklearn_pipelines.estimators.linear_model.Ridge(alpha=1.0, *, fit_intercept=True, normalize='deprecated', copy_X=True, max_iter=None, tol=0.001, solver='auto', positive=False, random_state=None)[source]
Bases: Ridge, _OptimizableLinearModel
Allows us to optimize a Ridge estimator.
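The key tunable shared by both linear models above is the regularization strength alpha: larger values shrink coefficients harder, and Lasso's L1 penalty can drive some exactly to zero while Ridge's L2 penalty only shrinks them. A sketch of that difference:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso, Ridge

# Only 3 of 10 features carry signal.
X, y = make_regression(n_samples=100, n_features=10, n_informative=3, random_state=0)

lasso = Lasso(alpha=10.0).fit(X, y)
ridge = Ridge(alpha=10.0).fit(X, y)

# Lasso zeroes out uninformative coefficients; Ridge keeps them small but nonzero.
lasso_zeros = int(np.sum(lasso.coef_ == 0.0))
ridge_zeros = int(np.sum(ridge.coef_ == 0.0))
```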
mindfoundry.optaas.client.sklearn_pipelines.estimators.pca module
- class mindfoundry.optaas.client.sklearn_pipelines.estimators.pca.PCA(n_components=None, *, copy=True, whiten=False, svd_solver='auto', tol=0.0, iterated_power='auto', random_state=None)[source]
Bases: PCA, OptimizableBaseEstimator
- make_parameters_constraints_and_prior_means(sk: SklearnParameterMaker, **kwargs) Tuple[List[Parameter], List[Constraint], List[PriorMeanExpression]] [source]
Generates Parameters, Constraints, and PriorMeans to optimize a PCA estimator.
- Parameters
feature_count (int) – Total number of features in your dataset.
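As with FastICA above, feature_count caps the n_components search range for PCA. A minimal sketch of the bound and the resulting output shape:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 6))  # feature_count = 6

# n_components must not exceed feature_count.
pca = PCA(n_components=3, svd_solver="full")
reduced = pca.fit_transform(X)
explained = float(pca.explained_variance_ratio_.sum())
```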
mindfoundry.optaas.client.sklearn_pipelines.estimators.svc module
- class mindfoundry.optaas.client.sklearn_pipelines.estimators.svc.LinearSVC(penalty='l2', loss='squared_hinge', *, dual=True, tol=0.0001, C=1.0, multi_class='ovr', fit_intercept=True, intercept_scaling=1, class_weight=None, verbose=0, random_state=None, max_iter=1000)[source]
Bases: LinearSVC, _OptimizableSVC
- class mindfoundry.optaas.client.sklearn_pipelines.estimators.svc.SVC(*, C=1.0, kernel='rbf', degree=3, gamma='scale', coef0=0.0, shrinking=True, probability=False, tol=0.001, cache_size=200, class_weight=None, verbose=False, max_iter=-1, decision_function_shape='ovr', break_ties=False, random_state=None)[source]
Bases: SVC, _OptimizableSVC
- make_parameters_constraints_and_prior_means(sk: SklearnParameterMaker, **kwargs) Tuple[List[Parameter], List[Constraint], List[PriorMeanExpression]] [source]
Generates Parameters, Constraints, and PriorMeans to optimize an SVC estimator.
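A structural point an optimizer for SVC has to capture: several parameters are only meaningful for particular kernels (e.g. degree applies only to kernel='poly', and coef0 to 'poly' and 'sigmoid'), which is the kind of conditionality the generated Constraints express. A plain scikit-learn sketch:

```python
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=150, random_state=0)

# degree is only consulted for the polynomial kernel; rbf ignores it.
rbf = SVC(C=1.0, kernel="rbf", gamma="scale").fit(X, y)
poly = SVC(C=1.0, kernel="poly", degree=3, coef0=0.0).fit(X, y)
```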
mindfoundry.optaas.client.sklearn_pipelines.estimators.voting module
- class mindfoundry.optaas.client.sklearn_pipelines.estimators.voting.VotingClassifier(estimators, *, voting='hard', weights=None, n_jobs=None, flatten_transform=True, verbose=False)[source]
Bases: VotingClassifier, OptimizableBaseEstimator
- make_parameters_constraints_and_prior_means(sk: SklearnParameterMaker, **kwargs) Tuple[List[Parameter], List[Constraint], List[PriorMeanExpression]] [source]
Generates Parameters, Constraints, and PriorMeans to optimize a VotingClassifier.
- steps: List[Any]
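VotingClassifier wraps a list of named sub-estimators, and the steps attribute above reflects that the optimizer also reaches into each sub-estimator's own parameters. A brief hard-voting sketch with plain scikit-learn (the sub-estimator choices here are illustrative):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=150, random_state=0)

# Hard voting: each sub-estimator casts one vote per sample.
voter = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("tree", DecisionTreeClassifier(random_state=0)),
    ],
    voting="hard",
)
voter.fit(X, y)
predictions = voter.predict(X)
```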
mindfoundry.optaas.client.sklearn_pipelines.estimators.xgboost module
- class mindfoundry.optaas.client.sklearn_pipelines.estimators.xgboost.XGBClassifier(*, objective: Optional[Union[str, Callable[[ndarray, ndarray], Tuple[ndarray, ndarray]]]] = 'binary:logistic', use_label_encoder: bool = True, **kwargs: Any)[source]
Bases: XGBClassifier, OptimizableBaseEstimator
- make_parameters_constraints_and_prior_means(sk: SklearnParameterMaker, **kwargs) Tuple[List[Parameter], List[Constraint], List[PriorMeanExpression]] [source]
Generates Parameters, Constraints, and PriorMeans to optimize an XGBClassifier estimator.
- Parameters
gpu_enabled (bool, optional, default=False) – If True, the objective parameter will include gpu-specific values such as ‘gpu:reg:linear’.