Completed: Push to master ( 39f179...514f8f ) by Tinghui, created at 01:17

SoftmaxLayer (rating: A)

Complexity

Total Complexity: 5

Size/Duplication

Total Lines: 50
Duplicated Lines: 100%

Importance

Changes: 1
Bugs: 0    Features: 0

Metric                          Value
c    (changes)                  1
b    (bugs)                     0
f    (features)                 0
dl   (duplicated lines)         50
loc  (lines of code)            50
rs                              10
wmc  (weighted method count)    5

1 Method

Rating   Name   Duplication   Size   Complexity  
B __init__() 28 28 5

How to fix: Duplicated Code

Duplicated code is one of the most pungent code smells. A common rule of thumb is to restructure code once it is duplicated in three or more places. The usual fixes are to extract the shared logic into a helper function, or to pull it up into a common base class.
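As a generic illustration of the extract-a-helper fix (all names here are hypothetical, not from this project), the same snippet pasted into two functions collapses into one shared helper:

```python
# Before: identical normalization logic pasted into two call sites.
def report_scores_duplicated(scores):
    total = sum(scores)
    return [s / total for s in scores]

def report_weights_duplicated(weights):
    total = sum(weights)
    return [w / total for w in weights]

# After: the shared logic lives in one helper; both callers delegate to it.
def normalize(values):
    total = sum(values)
    return [v / total for v in values]

def report_scores(scores):
    return normalize(scores)

def report_weights(weights):
    return normalize(weights)
```

Once the helper exists, a future change (say, guarding against a zero total) happens in one place instead of two.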

import math
import tensorflow as tf
from . import variable_summary


# Duplication: this code seems to be duplicated in your project.
class HiddenLayer:
    """ Typical hidden layer for a multi-layer perceptron.

    The user may specify the non-linear activation function.

    Args:
        n_in (:obj:`int`): Number of input cells.
        n_out (:obj:`int`): Number of output cells.
        name (:obj:`str`): Name of the hidden layer.
        x (:class:`tensorflow.placeholder`): Input tensor.
        W (:class:`tensorflow.Variable`): Weight matrix.
        b (:class:`tensorflow.Variable`): Bias vector.
        activation_fn: Activation function used in this hidden layer.
            Common values: :meth:`tensorflow.sigmoid` for the ``sigmoid`` function, :meth:`tensorflow.tanh`
            for the ``tanh`` function, :meth:`tensorflow.nn.relu` for ReLU.

    Attributes:
        n_in (:obj:`int`): Number of inputs into this layer.
        n_out (:obj:`int`): Number of outputs out of this layer.
        name (:obj:`str`): Name of the hidden layer.
        x (:class:`tensorflow.placeholder`): Tensorflow placeholder or tensor that represents the input of this layer.
        W (:class:`tensorflow.Variable`): Weight matrix of the current layer.
        b (:class:`tensorflow.Variable`): Bias vector of the current layer.
        variables (:obj:`list` of :class:`tensorflow.Variable`): Variables of the current layer.
        logits (:obj:`tensorflow.Tensor`): Tensor of the linear logits computed in the current layer.
        y (:class:`tensorflow.Tensor`): Tensor that represents the output of this layer.
        summaries (:obj:`list`): List of Tensorflow summary buffers.
    """
    def __init__(self, n_in, n_out, name, x=None, W=None, b=None, activation_fn=tf.sigmoid):
        self.n_in = n_in
        self.n_out = n_out
        self.name = name
        with tf.name_scope(name):
            if x is None:
                self.x = tf.placeholder(tf.float32, shape=[None, n_in])
            else:
                self.x = x
            if W is None:
                self.W = tf.Variable(
                    tf.truncated_normal(shape=[n_in, n_out], stddev=1.0 / math.sqrt(float(n_in))),
                    name='weights'
                )
            else:
                self.W = W
            if b is None:
                self.b = tf.Variable(tf.zeros(shape=[n_out]), name='biases')
            else:
                self.b = b
            self.variables = [self.W, self.b]
            self.logits = tf.matmul(self.x, self.W) + self.b
            self.y = activation_fn(self.logits, name='activations')
            self.summaries = []
            self.summaries += variable_summary(self.W, tag=name + '/weights')
            self.summaries += variable_summary(self.b, tag=name + '/bias')
            self.summaries.append(tf.summary.histogram(name + '/pre_act', self.logits))
            self.summaries.append(tf.summary.histogram(name + '/act', self.y))


# Duplication: this code seems to be duplicated in your project.
class SoftmaxLayer:
    """ Softmax layer used as a multi-class classification output layer.

    Args:
        n_in (:obj:`int`): Number of input cells.
        n_out (:obj:`int`): Number of output cells.
        name (:obj:`str`): Name of the layer.
        x (:class:`tensorflow.placeholder`): Input tensor.
        W (:class:`tensorflow.Variable`): Weight matrix.
        b (:class:`tensorflow.Variable`): Bias vector.

    Attributes:
        n_in (:obj:`int`): Number of inputs into this layer.
        n_out (:obj:`int`): Number of outputs out of this layer.
        name (:obj:`str`): Name of the layer.
        x (:class:`tensorflow.placeholder`): Tensorflow placeholder or tensor that represents the input of this layer.
        W (:class:`tensorflow.Variable`): Weight matrix of the current layer.
        b (:class:`tensorflow.Variable`): Bias vector of the current layer.
        variables (:obj:`list` of :class:`tensorflow.Variable`): Variables of the current layer.
        logits (:obj:`tensorflow.Tensor`): Tensor of the linear logits computed in the current layer.
        y (:class:`tensorflow.Tensor`): Tensor that represents the output of this layer.
        summaries (:obj:`list`): List of Tensorflow summary buffers.
    """
    def __init__(self, n_in, n_out, name, x=None, W=None, b=None):
        self.n_in = n_in
        self.n_out = n_out
        self.name = name
        with tf.name_scope(name):
            if x is None:
                self.x = tf.placeholder(tf.float32, shape=[None, n_in], name='input-x')
            else:
                self.x = x
            if W is None:
                self.W = tf.Variable(
                    tf.truncated_normal(shape=[n_in, n_out], stddev=1.0 / math.sqrt(float(n_in))),
                    name='weights'
                )
            else:
                self.W = W
            if b is None:
                self.b = tf.Variable(tf.zeros(shape=[n_out]), name='biases')
            else:
                self.b = b
            self.variables = [self.W, self.b]
            self.logits = tf.matmul(self.x, self.W) + self.b
            self.y = tf.nn.softmax(self.logits, name='softmax')
            self.summaries = []
            self.summaries += variable_summary(self.W, tag=name + '/weights')
            self.summaries += variable_summary(self.b, tag=name + '/bias')
            self.summaries.append(tf.summary.histogram(name + '/pre_act', self.logits))
            self.summaries.append(tf.summary.histogram(name + '/act', self.y))


class AutoencoderLayer(HiddenLayer):
    """Autoencoder layer.

    The autoencoder inherits the hidden layer's feed-forward computation and adds a
    self-encoding (reconstruction) tensor for unsupervised pre-training.

    Args:
        n_in (:obj:`int`): Number of input cells.
        n_out (:obj:`int`): Number of output cells.
        name (:obj:`str`): Name of the hidden layer.
        x (:class:`tensorflow.placeholder`): Input tensor.
        W (:class:`tensorflow.Variable`): Weight matrix.
        b (:class:`tensorflow.Variable`): Bias vector.
        shared_weights (:obj:`bool`): Whether weights are shared between encoding and decoding.

    Attributes:
        n_in (:obj:`int`): Number of inputs into this layer.
        n_out (:obj:`int`): Number of outputs out of this layer.
        name (:obj:`str`): Name of the hidden layer.
        x (:class:`tensorflow.placeholder`): Tensorflow placeholder or tensor that represents the input of this layer.
        W (:class:`tensorflow.Variable`): Weight matrix used in encoding.
        b (:class:`tensorflow.Variable`): Bias vector used in encoding.
        W_prime (:obj:`tensorflow.Tensor`): Weight matrix used in the self-decoding process. If weights are
            shared, it equals the transpose of the encoding weight matrix.
        b_prime (:obj:`tensorflow.Tensor`): Bias vector used in the self-decoding process.
        variables (:obj:`list` of :class:`tensorflow.Variable`): Variables of the current layer.
        logits (:obj:`tensorflow.Tensor`): Tensor of the linear logits computed after encoding.
        y (:class:`tensorflow.Tensor`): Tensor that represents the output of this layer.
        summaries (:obj:`list`): List of Tensorflow summary buffers.
    """
    def __init__(self, n_in, n_out, name, x=None, W=None, b=None, shared_weights=True):
        super().__init__(n_in, n_out, name, x, W, b, tf.sigmoid)
        self.b_prime = tf.Variable(tf.zeros(shape=[n_in]), name='biases_prime')
        self.variables.append(self.b_prime)
        if shared_weights:
            self.W_prime = tf.transpose(self.W)
        else:
            self.W_prime = tf.Variable(
                tf.truncated_normal(shape=[n_out, n_in], stddev=1.0 / math.sqrt(float(n_in))),
                name='weights_prime'
            )
            self.variables.append(self.W_prime)
        self.encode_logit = tf.matmul(self.y, self.W_prime) + self.b_prime
        self.encode = tf.sigmoid(self.encode_logit)
        # Mean squared reconstruction error of the self-decoding pass.
        self.encode_loss = tf.reduce_mean(tf.pow(self.x - self.encode, 2))
        self.summaries.append(tf.summary.scalar(name + '/ae_mse', self.encode_loss))
        self.merged = tf.summary.merge(self.summaries)
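Since SoftmaxLayer's __init__ repeats HiddenLayer's almost line for line, one way to resolve the flagged duplication is to have SoftmaxLayer reuse the shared constructor, much as AutoencoderLayer already delegates to HiddenLayer. Below is a minimal framework-free sketch of that pattern, with plain-Python stand-ins instead of TensorFlow variables (class and helper names here are illustrative, not part of the project):

```python
import math


class BaseLayer:
    """Shared construction logic: input/weight/bias setup lives in one place."""

    def __init__(self, n_in, n_out, name, activation):
        self.n_in, self.n_out, self.name = n_in, n_out, name
        # Stand-in for the tf.truncated_normal(stddev=1/sqrt(n_in)) init:
        # zero-initialized weight matrix and bias vector.
        self.W = [[0.0] * n_out for _ in range(n_in)]
        self.b = [0.0] * n_out
        self.activation = activation

    def forward(self, x):
        # logits = x @ W + b for a single input vector.
        logits = [
            sum(xi * wij for xi, wij in zip(x, col)) + bj
            for col, bj in zip(zip(*self.W), self.b)
        ]
        return self.activation(logits)


def sigmoid(logits):
    return [1.0 / (1.0 + math.exp(-z)) for z in logits]


def softmax(logits):
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]


class HiddenLayer(BaseLayer):
    def __init__(self, n_in, n_out, name, activation=sigmoid):
        super().__init__(n_in, n_out, name, activation)


class SoftmaxLayer(BaseLayer):
    # The duplicated __init__ collapses into a one-line delegation.
    def __init__(self, n_in, n_out, name):
        super().__init__(n_in, n_out, name, softmax)
```

In the TensorFlow version the same delegation would keep the name_scope, variable creation, and summary wiring in one constructor, with SoftmaxLayer passing tf.nn.softmax as the activation, which would eliminate the 28 duplicated lines the report flags.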