Passed: Push to master (a4a5a4...600197) by Simon (created 02:08)

tests.test_optimizers — Rating: A

Complexity

Total Complexity 13

Size/Duplication

Total Lines 99
Duplicated Lines 10.1 %

Importance

Changes 0

Metric   Value
wmc      13
eloc     62
dl       10
loc      99
rs       10
c        0
b        0
f        0

13 Functions

Rating   Name                                         Duplication   Size   Complexity
A        test_SimulatedAnnealingOptimizer()           0             3      1
A        test_StochasticTunnelingOptimizer()          0             3      1
A        test_ParticleSwarmOptimizer()                0             3      1
A        model()                                      10            10     1
A        test_RandomRestartHillClimbingOptimizer()    0             3      1
A        test_RandomAnnealingOptimizer()              0             3      1
A        test_HillClimbingOptimizer()                 0             3      1
A        test_ParallelTemperingOptimizer()            0             3      1
A        test_StochasticHillClimbingOptimizer()       0             3      1
A        test_RandomSearchOptimizer()                 0             3      1
A        test_TabuOptimizer()                         0             3      1
A        test_EvolutionStrategyOptimizer()            0             3      1
A        test_BayesianOptimizer()                     0             3      1

How to fix: Duplicated Code

Duplicate code is one of the most pungent code smells. A rule of thumb that is often used is to restructure code once it is duplicated in three or more places.

The usual remedy is to extract the duplicated code into one shared location and reuse it from every call site; a sketch of that approach for this file follows, and a second sketch after the listing shows how the near-identical test functions could be collapsed.
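According to the report, the 10.1 % of duplicated lines comes from the model() objective, which the tool flags as repeated elsewhere in the project. A minimal sketch of the extraction is shown below; it assumes a pytest-style tests/ package, and both the module name tests/objective.py and the function name dtc_objective are illustrative, not taken from the repository.

# Hypothetical tests/objective.py -- a sketch, not code from the repository.
# It collects the duplicated objective and search space in one shared place.
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier


def dtc_objective(para, X_train, y_train):
    # Same objective as in the flagged listing: a cross-validated decision tree.
    model = DecisionTreeClassifier(
        criterion=para["criterion"],
        max_depth=para["max_depth"],
        min_samples_split=para["min_samples_split"],
        min_samples_leaf=para["min_samples_leaf"],
    )
    scores = cross_val_score(model, X_train, y_train, cv=3)
    return scores.mean(), model


# Search space mapping the objective to its hyperparameter ranges,
# mirroring the search_config in the listing below.
search_config = {
    dtc_objective: {
        "criterion": ["gini", "entropy"],
        "max_depth": range(1, 21),
        "min_samples_split": range(2, 21),
        "min_samples_leaf": range(1, 21),
    }
}

Each test module would then import search_config from this helper instead of redefining it, which removes the cross-file duplication the report is measuring.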

# Author: Simon Blanke
# Email: [email protected]
# License: MIT License

from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from hyperactive import Hyperactive

data = load_iris()
X = data.data
y = data.target

n_iter_0 = 100
random_state = 0
n_jobs = 1


# Objective function: trains a decision tree with the given hyperparameters and
# returns its mean 3-fold cross-validation score. The report flags these lines
# as duplicated elsewhere in the project.
def model(para, X_train, y_train):
    model = DecisionTreeClassifier(
        criterion=para["criterion"],
        max_depth=para["max_depth"],
        min_samples_split=para["min_samples_split"],
        min_samples_leaf=para["min_samples_leaf"],
    )
    scores = cross_val_score(model, X_train, y_train, cv=3)

    return scores.mean(), model


# Search space: maps the objective function to its hyperparameter ranges.
search_config = {
    model: {
        "criterion": ["gini", "entropy"],
        "max_depth": range(1, 21),
        "min_samples_split": range(2, 21),
        "min_samples_leaf": range(1, 21),
    }
}


# Each test runs the same search with a different optimizer backend.
def test_HillClimbingOptimizer():
    opt = Hyperactive(search_config, optimizer="HillClimbing")
    opt.fit(X, y)


def test_StochasticHillClimbingOptimizer():
    opt = Hyperactive(search_config, optimizer="StochasticHillClimbing")
    opt.fit(X, y)


def test_TabuOptimizer():
    opt = Hyperactive(search_config, optimizer="TabuSearch")
    opt.fit(X, y)


def test_RandomSearchOptimizer():
    opt = Hyperactive(search_config, optimizer="RandomSearch")
    opt.fit(X, y)


def test_RandomRestartHillClimbingOptimizer():
    opt = Hyperactive(search_config, optimizer="RandomRestartHillClimbing")
    opt.fit(X, y)


def test_RandomAnnealingOptimizer():
    opt = Hyperactive(search_config, optimizer="RandomAnnealing")
    opt.fit(X, y)


def test_SimulatedAnnealingOptimizer():
    opt = Hyperactive(search_config, optimizer="SimulatedAnnealing")
    opt.fit(X, y)


def test_StochasticTunnelingOptimizer():
    opt = Hyperactive(search_config, optimizer="StochasticTunneling")
    opt.fit(X, y)


def test_ParallelTemperingOptimizer():
    opt = Hyperactive(search_config, optimizer="ParallelTempering")
    opt.fit(X, y)


def test_ParticleSwarmOptimizer():
    opt = Hyperactive(search_config, optimizer="ParticleSwarm")
    opt.fit(X, y)


def test_EvolutionStrategyOptimizer():
    opt = Hyperactive(search_config, optimizer="EvolutionStrategy")
    opt.fit(X, y)


def test_BayesianOptimizer():
    opt = Hyperactive(search_config, optimizer="Bayesian")
    opt.fit(X, y)
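The twelve test functions above differ only in the optimizer string, so they could be collapsed into a single parametrized test. The sketch below assumes pytest is the test runner and reuses the hypothetical shared helper from the earlier sketch; the Hyperactive calls and optimizer names mirror the listing above.

# Sketch of a deduplicated tests/test_optimizers.py (assumes pytest).
import pytest
from sklearn.datasets import load_iris
from hyperactive import Hyperactive

from tests.objective import search_config  # hypothetical helper from the earlier sketch

data = load_iris()
X = data.data
y = data.target

# Optimizer names taken from the listing above.
OPTIMIZERS = [
    "HillClimbing",
    "StochasticHillClimbing",
    "TabuSearch",
    "RandomSearch",
    "RandomRestartHillClimbing",
    "RandomAnnealing",
    "SimulatedAnnealing",
    "StochasticTunneling",
    "ParallelTempering",
    "ParticleSwarm",
    "EvolutionStrategy",
    "Bayesian",
]


@pytest.mark.parametrize("optimizer", OPTIMIZERS)
def test_optimizer(optimizer):
    # One parametrized test replaces the twelve near-identical functions.
    opt = Hyperactive(search_config, optimizer=optimizer)
    opt.fit(X, y)

Note that the 10.1 % duplication reported above is attributed to model() being repeated across the project, so the shared-helper sketch is the change that directly addresses the flag; parametrizing the tests mainly removes the repetitive three-line bodies inside this file.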