Passed | Pull Request — master (#110) | created by unknown | 01:31

optimizers — Rating: A

Complexity:        Total Complexity 5
Size/Duplication:  Total Lines 223, Duplicated Lines 0%
Importance:        Changes 0

Metric  Value
wmc     5
eloc    37
dl      0
loc     223
rs      10
c       0
b       0
f       0
5 Methods

Rating  Name                             Duplication  Size  Complexity
A       MyOptimizer._run()               0            18    1
A       MyOptimizer.__init__()           0            12    1
A       MyOptimizer._paramnames()        0            11    1
A       MyOptimizer.get_test_params()    0            41    1
A       MyOptimizer.get_search_config()  0            12    1
# copyright: hyperactive developers, MIT License (see LICENSE file)
"""Extension template for optimizers.

Purpose of this implementation template:
    quick implementation of new estimators following the template
    NOT a concrete class to import! This is NOT a base class or concrete class!
    This is to be used as a "fill-in" coding template.

How to use this implementation template to implement a new estimator:
- make a copy of the template in a suitable location, give it a descriptive name.
- work through all the "todo" comments below
- fill in code for mandatory methods, and optionally for optional methods
- do not write to reserved variables: _tags, _tags_dynamic
- you can add more private methods, but do not override BaseEstimator's private methods
    an easy way to be safe is to prefix your methods with "_custom"
- change docstrings for functions and the file
- ensure interface compatibility via hyperactive.utils.check_estimator
- once complete: use as a local library, or contribute to hyperactive via PR

Mandatory methods:
    run the search  - _run(self, experiment, **search_config) -> dict
    parameter names - _paramnames(self) -> list[str]

Testing - required for automated test framework and check_estimator usage:
    get default parameters for test instance(s) - get_test_params()
"""
# todo: write an informative docstring for the file or module, remove the above
# todo: add an appropriate copyright notice for your estimator
#       estimators contributed should have the copyright notice at the top
#       estimators of your own do not need to have permissive or MIT copyright

# todo: uncomment the following line, enter authors' GitHub IDs
# __author__ = [authorGitHubID, anotherAuthorGitHubID]

from hyperactive.base import BaseOptimizer

# todo: add any necessary imports here

# todo: for imports of soft dependencies:
# make sure to fill in the "python_dependencies" tag with the package import name
# import soft dependencies only inside methods of the class, not at the top of the file


class MyOptimizer(BaseOptimizer):
    """Custom optimizer. todo: write docstring.

    todo: describe your custom optimizer here

    Parameters
    ----------
    parama : int
        descriptive explanation of parama
    paramb : string, optional (default='default')
        descriptive explanation of paramb
    paramc : MyOtherEstimator, optional (default=MyOtherEstimator(foo=42))
        descriptive explanation of paramc
    and so on

    Examples
    --------
    >>> from somewhere import MyOptimizer
    >>> great_example(code)
    >>> multi_line_expressions(
    ...     require_dots_on_new_lines_so_that_expression_continues_properly
    ... )
    """

    # todo: fill in tags - most tags have sensible defaults below
    _tags = {
        # tags and full specifications are available in the tag API reference
        # TO BE ADDED
        #
        # --------------
        # packaging info
        # --------------
        #
        # ownership and contribution tags
        # -------------------------------
        #
        # author = author(s) of the estimator
        # an author is anyone with significant contribution to the code at some point
        "authors": ["author1", "author2"],
        # valid values: str or list of str, should be GitHub handles
        # this should follow best scientific contribution practices
        # scope is the code, not the methodology (method is per paper citation)
        # if interfacing a 3rd party estimator, ensure to give credit to the
        # authors of the interfaced estimator
        #
        # maintainer = current maintainer(s) of the estimator
        # per algorithm maintainer role, see governance document
        # this is an "owner" type role, with rights and maintenance duties
        # for 3rd party interfaces, the scope is the class only
        "maintainers": ["maintainer1", "maintainer2"],
        # valid values: str or list of str, should be GitHub handles
        # remove tag if maintained by package core team
        #
        # dependency tags: python version and soft dependencies
        # -----------------------------------------------------
        #
        # python version requirement
        "python_version": None,
        # valid values: str, PEP 440 valid python version specifiers
        # raises exception at construction if local python version is incompatible
        # delete tag if no python version requirement
        #
        # soft dependency requirement
        "python_dependencies": None,
        # valid values: str or list of str, PEP 440 valid package version specifiers
        # raises exception at construction if the modules named cannot be imported
        # delete tag if no soft dependency requirement
    }
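
    # illustrative example (not part of the template): a filled-in tag dict for a
    # pure-python optimizer with no soft dependencies might look like this; the
    # handle "myGitHubID" is a hypothetical placeholder:
    #
    # _tags = {
    #     "authors": ["myGitHubID"],
    #     "maintainers": ["myGitHubID"],
    # }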

    # todo: add any hyper-parameters and components to constructor
    def __init__(self, parama, paramb="default", paramc=None, experiment=None):
        # todo: write any hyper-parameters to self
        self.parama = parama
        self.paramb = paramb
        self.paramc = paramc
        # IMPORTANT: the parameters written to self must never be overwritten
        # or mutated from now on
        # for handling defaults etc, write to other attributes, e.g., self._parama
        self.experiment = experiment
        # IMPORTANT: experiment must come last, and have default value None

        # leave this as is
        super().__init__()

        # todo: optional, parameter checking logic (if applicable) should happen here
        # if this writes derived values to self, it should *not* overwrite
        # self.parama etc - instead, write to self._parama, self._newparam
        # (names starting with _)
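
        # illustrative sketch (not part of the template): resolving a None default
        # without mutating the constructor argument, using the MyOtherEstimator
        # placeholder from the docstring above:
        #
        # if paramc is None:
        #     self._paramc = MyOtherEstimator(foo=42)
        # else:
        #     self._paramc = paramc
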
    # todo: implement this, mandatory
    def _paramnames(self):
        """Return the parameter names of the search.

        Returns
        -------
        list of str
            The parameter names of the search parameters.
        """
        # for every instance, this should return the correct parameter names
        # i.e., the maximal set of keys of the dict expected by the experiment's score
        return ["score_param1", "score_param2"]

    # optional: implement this to prepare arguments for _run
    # the default is all parameters passed to __init__, except experiment
    def get_search_config(self):
        """Get the search configuration.

        Returns
        -------
        dict with str keys
            The search configuration dictionary.
        """
        # the default
        search_config = super().get_search_config()
        search_config["one_more_param"] = 42
        return search_config

    # todo: implement this, mandatory
    def _run(self, experiment, **search_config):
        """Run the optimization search process.

        Parameters
        ----------
        experiment : BaseExperiment
            The experiment to optimize parameters for.
        search_config : dict with str keys
            identical to return of ``get_search_config``.

        Returns
        -------
        dict with str keys
            The best parameters found during the search.
            Must have keys that are a subset of, or identical to,
            experiment.paramnames().
        """
        best_params = {"write_some_logic_to_get": "best_params"}
        return best_params
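
        # illustrative sketch (not part of the template): a naive random-search
        # loop; ``n_iter`` and ``param_space`` are hypothetical search_config
        # entries, and ``experiment.score(params) -> float`` is an assumed
        # evaluation interface:
        #
        # import random
        #
        # best_params, best_score = None, float("-inf")
        # for _ in range(search_config["n_iter"]):
        #     candidate = {
        #         name: random.choice(values)
        #         for name, values in search_config["param_space"].items()
        #     }
        #     score = experiment.score(candidate)
        #     if score > best_score:
        #         best_params, best_score = candidate, score
        # return best_params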

    # todo: implement this for testing purposes!
    #   required to run local automated unit and integration testing of estimator
    #   method should return default parameters, so that a test instance can be created
    @classmethod
    def get_test_params(cls, parameter_set="default"):
        """Return testing parameter settings for the estimator.

        Parameters
        ----------
        parameter_set : str, default="default"
            Name of the set of test parameters to return, for use in tests. If no
            special parameters are defined for a value, will return the `"default"`
            set. There are currently no reserved values for this type of estimator.

        Returns
        -------
        params : dict or list of dict, default = {}
            Parameters to create testing instances of the class.
            Each dict contains parameters to construct an "interesting" test
            instance, i.e., `MyClass(**params)` or `MyClass(**params[i])` creates a
            valid test instance. `create_test_instance` uses the first (or only)
            dictionary in `params`.
        """
        # todo: set the testing parameters for the estimator
        # Testing parameters can be a dictionary or a list of dictionaries.
        # Testing parameter choice should cover internal cases well.
        #   for a "simple" extension, ignore the parameter_set argument.
        #
        # IMPORTANT: all parameter sets must contain an experiment object
        # this must be passed here, even if experiment can be left None in __init__
        from somewhere import AnotherExperiment, MyExperiment

        paramset1 = {
            "parama": 0,
            "paramb": "default",
            "paramc": None,
            "experiment": MyExperiment("experiment_params"),
        }
        paramset2 = {
            "parama": 1,
            "paramb": "foo",
            "paramc": 42,
            "experiment": AnotherExperiment("another_experiment_params"),
        }
        return [paramset1, paramset2]

        # this method can, if required, use:
        #   class properties (e.g., inherited); parent class test case
        #   imported objects such as estimators from sklearn
        # important: all such imports should be *inside get_test_params*, not at the top
        #            since imports are used only at testing time
        #
        # A good parameter set should primarily satisfy two criteria:
        #   1. The chosen set of parameters should have a low testing time,
        #      ideally on the order of a few seconds for the entire test suite.
        #      This is vital for cases where default values result in "big" models,
        #      which not only increase test time but also run the risk of test
        #      workers crashing.
        #   2. There should be a minimum of two such parameter sets with different
        #      values, to ensure a wide range of code coverage.
        #
        # example 1: specify params as a dictionary
        # any number of params can be specified
        # params = {"est": value0, "parama": value1, "paramb": value2}
        #
        # example 2: specify params as a list of dictionaries
        # note: only the first dictionary will be used by create_test_instance
        # params = [{"est": value1, "parama": value2},
        #           {"est": value3, "parama": value4}]
        #
        # return params
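
# illustrative usage sketch (not part of the template): once the todos are filled
# in, interface conformance can be checked via hyperactive.utils.check_estimator,
# as mentioned in the module docstring; MyExperiment is a hypothetical placeholder
# mirroring the import in get_test_params above, and ``run`` is assumed to be the
# public entry point dispatching to ``_run``:
#
# from hyperactive.utils import check_estimator
# from somewhere import MyExperiment
#
# check_estimator(MyOptimizer)
#
# opt = MyOptimizer(parama=0, experiment=MyExperiment("experiment_params"))
# best_params = opt.run()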