| Metric | Value |
| ------ | ----- |
| Conditions | 14 |
| Total Lines | 69 |
| Code Lines | 55 |
| Comment Lines | 0 |
| Comment Ratio | 0 % |
| Changes | 0 |
Small methods make your code easier to understand, particularly when combined with a good name. Conveniently, the smaller the method, the easier it usually is to name well.
For example, if you find yourself adding comments to a method's body, that is usually a good sign that the commented block should be extracted into a new method; the comment then becomes the starting point for the new method's name, as in the sketch below.
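A minimal sketch of that move. All names here (`report_name_before`, `_algorithm_name`, the example `Runner` class itself) are invented for illustration and are not NiaPy API; the before/after bodies mirror the name-resolution code flagged further down:

```python
class Runner:
    def __init__(self, algorithm):
        self.algorithm = algorithm  # either a name string or an algorithm instance

    # Before: a comment labels the block that wants to be its own method.
    def report_name_before(self):
        # resolve a printable name for the algorithm
        if not isinstance(self.algorithm, str):
            alg_name = type(self.algorithm).__name__
        else:
            alg_name = self.algorithm
        return alg_name

    # After: the comment has become the method name.
    def report_name(self):
        return self._algorithm_name()

    def _algorithm_name(self):
        """Return the algorithm itself if it is already a name, else its class name."""
        alg = self.algorithm
        return alg if isinstance(alg, str) else type(alg).__name__
```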
Commonly applied refactorings include:

* Extract Method

If many parameters or temporary variables are present:

* Replace Temp with Query (a generic sketch follows this list)
* Extract Class
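A minimal, generic sketch of Replace Temp with Query. The `Order` example is invented and has nothing to do with NiaPy; the point is that each temporary becomes a small query method, which shortens the host method and makes further Extract Method steps easier:

```python
class Order:
    def __init__(self, price, quantity):
        self.price = price
        self.quantity = quantity

    # Before: temporaries pile up inside the long method.
    def total_before(self):
        base = self.price * self.quantity
        discount = base * 0.05 if base > 1000 else 0.0
        return base - discount

    # After: each temp is now a query that other methods can reuse.
    def base_total(self):
        return self.price * self.quantity

    def discount(self):
        return self.base_total() * 0.05 if self.base_total() > 1000 else 0.0

    def total(self):
        return self.base_total() - self.discount()  # Order(10, 200).total() == 1900.0
```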
Complex methods like NiaPy.runner.Runner.__export_to_latex() often do a lot of different things, and are usually a sign that the surrounding class does too. To break such a class down, we need to identify a cohesive component within it. A common approach to finding such a component is to look for fields and methods that share the same prefixes or suffixes.
Once you have determined the members that belong together, you can apply the Extract Class refactoring. If the component makes sense as a subclass, Extract Subclass is also a candidate, and is often faster.
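In the listing below, the `__export_to_*` methods called at the end of the method share exactly such a prefix. A minimal Extract Class sketch built on that observation, using a hypothetical `ResultsExporter` class that is not part of NiaPy:

```python
class ResultsExporter:
    """Hypothetical owner of the export behaviour currently spread over Runner."""

    def __init__(self, results):
        self.results = results

    def export(self, kind):
        # One dispatch point instead of an if/elif chain in the host method.
        exporters = {
            "dataframe": self.to_dataframe_pickle,
            "json": self.to_json,
            "xlsx": self.to_xlsx,
        }
        if kind not in exporters:
            raise TypeError("Passed export type {} is not supported!".format(kind))
        exporters[kind]()

    # Stubs standing in for the real export routines.
    def to_dataframe_pickle(self): ...
    def to_json(self): ...
    def to_xlsx(self): ...
```

The report's flagged code follows.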
```python
# encoding=utf8

# ...

        Raises:
            TypeError: Raises TypeError if export type is not supported

        Returns:
            dict: Returns dictionary of results

        See Also:
            * :func:`NiaPy.Runner.useAlgorithms`
            * :func:`NiaPy.Runner.useBenchmarks`
            * :func:`NiaPy.Runner.__algorithmFactory`

        """

        for alg in self.useAlgorithms:
            if not isinstance(alg, "".__class__):
                alg_name = str(type(alg).__name__)
            else:
                alg_name = alg

            self.results[alg_name] = {}

            if verbose:
                logger.info("Running %s...", alg_name)

            for bench in self.useBenchmarks:
                if not isinstance(bench, "".__class__):
                    bench_name = str(type(bench).__name__)
                else:
                    bench_name = bench

                if verbose:
                    logger.info("Running %s algorithm on %s benchmark...", alg_name, bench_name)

                self.results[alg_name][bench_name] = []
                for _ in range(self.nRuns):
                    algorithm = AlgorithmUtility().get_algorithm(alg)
                    benchmark_stopping_task = self.benchmark_factory(bench)
                    self.results[alg_name][bench_name].append(algorithm.run(benchmark_stopping_task))
                if verbose:
                    logger.info("---------------------------------------------------")

        if export == "dataframe":
            self.__export_to_dataframe_pickle()
        elif export == "json":
            self.__export_to_json()
        elif export == "xsl":
            self._export_to_xls()
        elif export == "xlsx":
            self.__export_to_xlsx()
        else:
            raise TypeError("Passed export type %s is not supported!", export)
        return self.results
```
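Putting the suggestions together, here is one possible shape for the two hot spots, as a hypothetical sketch that has not been tested against the real NiaPy API: the duplicated isinstance dance collapses into a single helper, and the export chain becomes a dispatch table. Note also that the original `raise TypeError("Passed export type %s is not supported!", export)` never interpolates `export`; it merely passes it as a second exception argument, which the sketch fixes by formatting the message.

```python
class Runner:
    # Stubs for the existing export methods, kept so the sketch is self-contained.
    def __export_to_dataframe_pickle(self): ...
    def __export_to_json(self): ...
    def _export_to_xls(self): ...
    def __export_to_xlsx(self): ...

    @staticmethod
    def _instance_name(obj):
        """Return obj itself if it is already a name string, else its class name."""
        return obj if isinstance(obj, str) else type(obj).__name__

    def _export_results(self, export):
        """Single dispatch point replacing the if/elif export chain."""
        exporters = {
            "dataframe": self.__export_to_dataframe_pickle,
            "json": self.__export_to_json,
            "xsl": self._export_to_xls,
            "xlsx": self.__export_to_xlsx,
        }
        if export not in exporters:
            raise TypeError("Passed export type {} is not supported!".format(export))
        exporters[export]()
```

With those two extractions in place, the nested benchmark loop becomes short enough that each remaining block can be read, and named, on its own.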