| Metric | Value |
| --- | --- |
| Conditions | 20 |
| Total Lines | 58 |
| Lines | 0 |
| Ratio | 0 % |
| Changes | 3 |
| Bugs | 0 |
| Features | 0 |
Small methods make your code easier to understand, particularly when combined with a good name. Moreover, if your method is small, finding a good name is usually much easier.
For example, if you find yourself adding comments to a method's body, this is usually a good sign that you should extract the commented part into a new method, using the comment as a starting point for naming it.
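As a small illustration of this "turn the comment into a method name" idea (the order/price example and all names here are hypothetical, not taken from the analyzed code):

```python
from collections import namedtuple

Item = namedtuple("Item", "price")
Order = namedtuple("Order", "items")

# Before: a comment explains what the block does.
def process(order):
    # compute total price including tax
    total = sum(item.price for item in order.items)
    total *= 1.19
    return total

# After: the comment's wording becomes the new method's name.
def total_price_including_tax(order, tax_rate=1.19):
    return sum(item.price for item in order.items) * tax_rate

def process_refactored(order):
    return total_price_including_tax(order)
```

The behavior is unchanged; the comment has simply become a searchable, reusable name.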
Commonly applied refactorings include:
- Extract Method
- If many parameters/temporary variables are present: Replace Method with Method Object
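One standard remedy when a method juggles many parameters and temporaries is Replace Method with Method Object (from Fowler's catalog): the temporaries become fields, so the long body can be split into small methods without threading arguments around. A minimal sketch with hypothetical names:

```python
class PriceCalculator:
    """Method object: former parameters/temporaries become fields."""

    def __init__(self, base, discount, tax):
        self.base = base
        self.discount = discount
        self.tax = tax

    def compute(self):
        # Each step can now be its own small, named method.
        return self._discounted() * (1 + self.tax)

    def _discounted(self):
        return self.base * (1 - self.discount)
```

Because the intermediate values live on `self`, each extracted step needs no parameter list of its own.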
Complex functions like convert_to_theano_var() often do a lot of different things. To break such a function down, we need to identify cohesive components within it. A common approach is to look for variables and statements that share the same prefixes or suffixes, or that operate on the same data.
Once you have determined the parts that belong together, you can apply the Extract Method refactoring. When the cohesive component is a group of fields in a class, Extract Class applies instead; and if the component makes sense as a sub-class, Extract Subclass is also a candidate, and is often faster.
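A minimal sketch of Extract Class, using a hypothetical example where a shared `address_` prefix reveals the hidden component:

```python
# Before: the "address_" prefix hints at a component hiding inside Customer.
class CustomerBefore:
    def __init__(self):
        self.name = ""
        self.address_street = ""
        self.address_city = ""
        self.address_zip = ""

# After Extract Class: the prefixed fields move into their own class.
class Address:
    def __init__(self, street="", city="", zip_code=""):
        self.street = street
        self.city = city
        self.zip_code = zip_code

class Customer:
    def __init__(self, name="", address=None):
        self.name = name
        self.address = address or Address()
```

The prefix disappears because the grouping it encoded is now expressed by the `Address` type itself.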
```python
#!/usr/bin/env python
# Assumed import paths: the excerpt uses these names without importing them.
from theano.tensor.var import TensorVariable
from deepy.utils import MapDict


def convert_to_theano_var(obj):
    """
    Convert neural vars to theano vars.
    :param obj: NeuralVariable or list or dict or tuple
    :return: (theano var, tensor found, neural var found)
    """
    from deepy.core.neural_var import NeuralVariable
    if type(obj) == tuple:
        return tuple(convert_to_theano_var(list(obj)))
    if type(obj) == list:
        unpacked_list = map(convert_to_theano_var, obj)
        normal_list = []
        test_list = []  # unused in this excerpt
        theano_var_found = False
        neural_var_found = False
        for normal_var, tensor_found, neural_found in unpacked_list:
            normal_list.append(normal_var)
            if tensor_found: theano_var_found = True
            if neural_found: neural_var_found = True
        return normal_list, theano_var_found, neural_var_found
    elif type(obj) == dict:
        normal_map = {}
        theano_var_found = False
        neural_var_found = False
        for key in obj:
            normal_var, tensor_found, neural_found = convert_to_theano_var(obj[key])
            normal_map[key] = normal_var
            if tensor_found: theano_var_found = True
            if neural_found: neural_var_found = True
        return normal_map, theano_var_found, neural_var_found
    elif type(obj) == MapDict:
        normal_map = {}
        theano_var_found = False
        neural_var_found = False
        for key in obj:
            normal_var, tensor_found, neural_found = convert_to_theano_var(obj[key])
            normal_map[key] = normal_var
            if tensor_found: theano_var_found = True
            if neural_found: neural_var_found = True
        return MapDict(normal_map), theano_var_found, neural_var_found
    elif type(obj) == NeuralVariable:
        theano_tensor = obj.tensor
        theano_tensor.tag.last_dim = obj.dim()
        return theano_tensor, False, True
    elif type(obj) == TensorVariable:
        return obj, True, False
    elif type(obj) == slice:
        normal_args = []
        theano_var_found = False
        neural_var_found = False
        for arg in [obj.start, obj.stop, obj.step]:
            normal_var, tensor_found, neural_found = convert_to_theano_var(arg)
            normal_args.append(normal_var)
            if tensor_found: theano_var_found = True
            if neural_found: neural_var_found = True
        return slice(*normal_args), theano_var_found, neural_var_found
    else:
        return obj, False, False

# ... (intervening lines elided in the original report) ...
#     return neural_computation(original_func, prefer_tensor=True)
```
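The list, dict, MapDict, and slice branches above all repeat the same pattern: convert each child, collect the results, and OR the found-flags together. That duplicated loop is exactly the kind of cohesive component Extract Method targets. A hypothetical sketch (`convert_each` is not part of deepy's API):

```python
def convert_each(children, convert):
    """Convert each child and combine the found-flags with OR."""
    converted = []
    tensor_found = neural_found = False
    for child in children:
        var, t_found, n_found = convert(child)
        converted.append(var)
        tensor_found = tensor_found or t_found
        neural_found = neural_found or n_found
    return converted, tensor_found, neural_found
```

Each branch then shrinks to one call plus its container-specific wrapping, e.g. the list branch becomes `normal_list, t, n = convert_each(obj, convert_to_theano_var)` and the slice branch becomes `args, t, n = convert_each([obj.start, obj.stop, obj.step], convert_to_theano_var)` followed by `slice(*args)`.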