| Metric | Value |
| --- | --- |
| Conditions | 1 |
| Total Lines | 57 |
| Code Lines | 18 |
| Lines | 0 |
| Ratio | 0 % |
| Changes | 0 |
Small methods make your code easier to understand, especially in combination with a good name. Conversely, if your method is small, finding a good name for it is usually much easier.

For example, if you find yourself adding comments to a method's body, that is usually a sign that you should extract the commented part into a new method, using the comment as a starting point for the new method's name (see the sketch below).
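To make this concrete, here is a minimal before/after sketch of that extraction; all names (`Order`, `process_order`, `total_price_with_tax`) are invented for the illustration, not taken from the flagged code:

```python
from dataclasses import dataclass

@dataclass
class Item:
    price: float

@dataclass
class Order:
    items: list
    tax_rate: float

# Before: a comment explains what the block does
def process_order(order: Order) -> float:
    # calculate total price including tax
    total = sum(item.price for item in order.items)
    return total * (1 + order.tax_rate)

# After: the commented block becomes its own method,
# and the comment lives on as the method's name
def total_price_with_tax(order: Order) -> float:
    total = sum(item.price for item in order.items)
    return total * (1 + order.tax_rate)

def process_order_refactored(order: Order) -> float:
    return total_price_with_tax(order)
```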
Commonly applied refactorings include:

- Extract Method

If many parameters/temporary variables are present:

- Replace Temp with Query
- Introduce Parameter Object (sketched below)
- Preserve Whole Object
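A minimal sketch of Introduce Parameter Object: the parameter names echo the flagged code below, but `SearchSettings` and the `run_search_*` functions are invented for this illustration:

```python
from dataclasses import dataclass
from typing import Optional

# Before: a long parameter list that is hard to read and easy to misorder
def run_search_before(n_iter, metric, n_jobs, cv, verbosity, random_state):
    print(f"running {n_iter} iterations, scored by {metric}")

# After: related parameters travel together as one object with defaults
@dataclass
class SearchSettings:
    n_iter: int = 10
    metric: str = "accuracy"
    n_jobs: int = 1
    cv: int = 3
    verbosity: int = 1
    random_state: Optional[int] = None

def run_search_after(settings: SearchSettings):
    print(f"running {settings.n_iter} iterations, scored by {settings.metric}")

run_search_after(SearchSettings(n_iter=50, cv=5))
```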
The method flagged by this report:

```python
# Author: Simon Blanke

# ...

def __init__(self, *args, **kwargs):
    """
    Parameters
    ----------
    search_config: dict
        A dictionary providing the model and hyperparameter search space for the
        optimization process.
    n_iter: int
        The number of iterations the optimizer performs.
    metric: string, optional (default: "accuracy")
        The metric the model is evaluated by.
    n_jobs: int, optional (default: 1)
        The number of searches to run in parallel.
    cv: int, optional (default: 3)
        The number of folds for the cross-validation.
    verbosity: int, optional (default: 1)
        Verbosity level. 1 prints out warm_start points and their scores.
    random_state: int, optional (default: None)
        Sets the random seed.
    warm_start: dict, optional (default: False)
        Dictionary that defines a start point for the optimizer.
    memory: bool, optional (default: True)
        A memory that saves evaluations during the optimization to save time
        when the optimizer returns to a position.
    scatter_init: int, optional (default: False)
        Defines the number n of random positions that should be evaluated with
        1/n of the training data, to find a better initial position.

    Returns
    -------
    None
    """
    # Map user-facing optimizer names to their implementation classes
    optimizer_dict = {
        "HillClimbing": HillClimbingOptimizer,
        "StochasticHillClimbing": StochasticHillClimbingOptimizer,
        "TabuSearch": TabuOptimizer,
        "RandomSearch": RandomSearchOptimizer,
        "RandomRestartHillClimbing": RandomRestartHillClimbingOptimizer,
        "RandomAnnealing": RandomAnnealingOptimizer,
        "SimulatedAnnealing": SimulatedAnnealingOptimizer,
        "StochasticTunneling": StochasticTunnelingOptimizer,
        "ParallelTempering": ParallelTemperingOptimizer,
        "ParticleSwarm": ParticleSwarmOptimizer,
        "EvolutionStrategy": EvolutionStrategyOptimizer,
        "Bayesian": BayesianOptimizer,
    }

    # Parse the user's arguments, then split out the optimizer-specific ones
    _config_ = Config(*args, **kwargs)
    _arg_ = Arguments(**_config_.opt_para)

    # Look up the selected optimizer class and instantiate it
    optimizer_class = optimizer_dict[_config_.optimizer]
    self._optimizer_ = optimizer_class(_config_, _arg_)
```
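The report's own advice applies directly here: the optimizer mapping can be hoisted to module level and the lookup extracted into a named helper. A minimal sketch, assuming the same imports as the excerpt above; `OPTIMIZER_DICT`, `_select_optimizer`, and the enclosing class name `Hyperactive` are invented for this example, not taken from the source:

```python
# Hypothetical refactoring sketch: the mapping leaves __init__
# and the lookup gets a descriptive name.
OPTIMIZER_DICT = {
    "HillClimbing": HillClimbingOptimizer,
    "Bayesian": BayesianOptimizer,
    # ... remaining entries exactly as in the mapping above ...
}

class Hyperactive:  # enclosing class name assumed for illustration
    def __init__(self, *args, **kwargs):
        _config_ = Config(*args, **kwargs)
        _arg_ = Arguments(**_config_.opt_para)
        self._optimizer_ = self._select_optimizer(_config_)(_config_, _arg_)

    @staticmethod
    def _select_optimizer(config):
        # The lookup now has a name that says what it does
        return OPTIMIZER_DICT[config.optimizer]
```

This shortens `__init__` to the three steps it actually performs (parse config, split arguments, build the optimizer) and lets the mapping be reused or tested on its own.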