| Metric | Value |
| --- | --- |
| Conditions | 1 |
| Total Lines | 60 |
| Code Lines | 20 |
| Lines | 0 |
| Ratio | 0 % |
| Changes | 0 |
Small methods make your code easier to understand, especially when combined with a good name. And if your method is small, finding a good name is usually much easier.
For example, if you find yourself adding comments to a method's body, that is usually a sign that you should extract the commented part into a new method and use the comment as a starting point for naming it.
Commonly applied refactorings include:

- Extract Method

If many parameters/temporary variables are present:

- Replace Temp with Query
- Introduce Parameter Object
- Preserve Whole Object
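As a minimal illustration of Extract Method (all names here are hypothetical, not taken from the code under review), a commented block inside a method can be pulled out into a new method named after the comment:

```python
def report(orders):
    # Before the refactoring, the next line was an inline loop
    # preceded by the comment "compute the order total".
    # The comment became the name of the extracted method.
    total = order_total(orders)
    return f"Total: {total:.2f}"


def order_total(orders):
    """Extracted method: sums price * quantity over all orders."""
    return sum(order["price"] * order["qty"] for order in orders)
```

After the extraction, `report` reads at a single level of abstraction, and the comment is no longer needed because the method name carries its meaning.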
```python
# Author: Simon Blanke

# ...

def __init__(self, *args, **kwargs):
    """
    Parameters
    ----------
    search_config: dict
        A dictionary providing the model and hyperparameter search space for the
        optimization process.
    n_iter: int
        The number of iterations the optimizer performs.
    metric: string, optional (default: "accuracy")
        The metric the model is evaluated by.
    n_jobs: int, optional (default: 1)
        The number of searches to run in parallel.
    cv: int, optional (default: 3)
        The number of folds for the cross-validation.
    verbosity: int, optional (default: 1)
        Verbosity level. 1 prints out warm_start points and their scores.
    random_state: int, optional (default: None)
        Sets the random seed.
    warm_start: dict, optional (default: False)
        Dictionary that defines a start point for the optimizer.
    memory: bool, optional (default: True)
        A memory that saves evaluations during the optimization, to save time
        when the optimizer returns to a position.
    scatter_init: int, optional (default: False)
        Defines the number n of random positions that should be evaluated with
        1/n of the training data, to find a better initial position.

    Returns
    -------
    None
    """

    optimizer_dict = {
        "HillClimbing": HillClimbingOptimizer,
        "StochasticHillClimbing": StochasticHillClimbingOptimizer,
        "TabuSearch": TabuOptimizer,
        "RandomSearch": RandomSearchOptimizer,
        "RandomRestartHillClimbing": RandomRestartHillClimbingOptimizer,
        "RandomAnnealing": RandomAnnealingOptimizer,
        "SimulatedAnnealing": SimulatedAnnealingOptimizer,
        "StochasticTunneling": StochasticTunnelingOptimizer,
        "ParallelTempering": ParallelTemperingOptimizer,
        "ParticleSwarm": ParticleSwarmOptimizer,
        "EvolutionStrategy": EvolutionStrategyOptimizer,
        "Bayesian": BayesianOptimizer,
    }

    _core_ = Core(*args, **kwargs)
    _arg_ = Arguments(**_core_.opt_para)

    optimizer_class = optimizer_dict[_core_.optimizer]
    self._optimizer_ = optimizer_class(_core_, _arg_)

    self.pos_list = self._optimizer_.pos_list
    self.score_list = self._optimizer_.score_list
```
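The `__init__` above selects an optimizer class by looking its name up in a dictionary rather than walking an if/elif chain. A self-contained sketch of that dispatch pattern (the classes below are illustrative stand-ins, not Hyperactive's actual optimizers):

```python
class HillClimbing:
    def run(self):
        return "hill climbing step"


class RandomSearch:
    def run(self):
        return "random sampling step"


# Map user-facing names to classes. Adding an optimizer means adding
# one entry here; an unknown name fails fast with a KeyError.
OPTIMIZERS = {
    "HillClimbing": HillClimbing,
    "RandomSearch": RandomSearch,
}


def make_optimizer(name):
    """Instantiate the optimizer registered under the given name."""
    return OPTIMIZERS[name]()
```

For example, `make_optimizer("RandomSearch").run()` returns `"random sampling step"`. The dictionary keeps the name-to-class mapping in one place, which is why long constructor bodies like the one above often still score well on the conditions metric.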