Passed
Push — master (331d4b...653c7c) by Arkadiusz, created 02:19

src/Phpml/Helper/Optimizer/ConjugateGradient.php (1 issue)


<?php

declare(strict_types=1);

namespace Phpml\Helper\Optimizer;

/**
 * Conjugate Gradient method to minimize a non-linear f(x) with respect to the unknown x.
 * See https://en.wikipedia.org/wiki/Nonlinear_conjugate_gradient_method
 *
 * The method applied below is explained in a practical manner in the following document:
 *  - http://web.cs.iastate.edu/~cs577/handouts/conjugate-gradient.pdf
 *
 * However, it is compliant with the general Conjugate Gradient method using the
 * Fletcher-Reeves update. Note that f(x) is assumed to be one-dimensional and a
 * single gradient is used for all dimensions in the given data.
 */
class ConjugateGradient extends GD
{
    public function runOptimization(array $samples, array $targets, \Closure $gradientCb) : array
    {
        $this->samples = $samples;
        $this->targets = $targets;
        $this->gradientCb = $gradientCb;
        $this->sampleCount = count($samples);
        $this->costValues = [];

        // Start with the steepest-descent direction d(0) = –∇f(θ)
        $d = mp::muls($this->gradient($this->theta), -1);

        for ($i = 0; $i < $this->maxIterations; ++$i) {
            // Obtain α that minimizes f(θ + α.d)
            $alpha = $this->getAlpha(array_sum($d));

            // θ(k+1) = θ(k) + α.d
            $thetaNew = $this->getNewTheta($alpha, $d);

            // β = ||∇f(x(k+1))||²  ∕  ||∇f(x(k))||²
            $beta = $this->getBeta($thetaNew);

            // d(k+1) = –∇f(x(k+1)) + β(k).d(k)
            $d = $this->getNewDirection($thetaNew, $beta, $d);

            // Save values for the next iteration
            $oldTheta = $this->theta;
            $this->costValues[] = $this->cost($thetaNew);

            $this->theta = $thetaNew;
            if ($this->enableEarlyStop && $this->earlyStop($oldTheta)) {
                break;
            }
        }

        $this->clear();

        return $this->theta;
    }

    /**
     * Executes the callback function for the problem and returns
     * the gradient for all samples & targets.
     */
    protected function gradient(array $theta) : array
    {
        [, $gradient] = parent::gradient($theta);

        return $gradient;
    }

    /**
     * Returns the value of f(x) for a given solution.
     */
    protected function cost(array $theta) : float
    {
        [$cost] = parent::gradient($theta);
Comprehensibility Bug: It seems like you call parent on a different method (gradient() instead of cost()). Are you sure this is correct? If so, you might want to change this to $this->gradient().

This check looks for a call to a parent method whose name is different than the method from which it is called.

Consider the following code:

class Daddy
{
    protected function getFirstName()
    {
        return "Eidur";
    }

    protected function getSurName()
    {
        return "Gudjohnsen";
    }
}

class Son extends Daddy
{
    public function getFirstName()
    {
        return parent::getSurName();
    }
}

The getFirstName() method in Son calls the wrong method in the parent class.
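
In this file, though, the cross-call looks deliberate rather than a bug: both destructuring sites suggest that GD::gradient() returns a [cost, gradient] pair, so cost() reuses the parent call and keeps only the first element. A minimal sketch of that assumed contract (the parent implementation is not shown in this diff):

// Assumed shape of the parent call, inferred from the two destructuring sites:
[$cost, $gradient] = parent::gradient($theta);
$avgCost  = array_sum($cost) / $this->sampleCount; // what cost() computes
$gradOnly = $gradient;                             // what gradient() returns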

        return array_sum($cost) / $this->sampleCount;
    }

    /**
     * Calculates the alpha that minimizes the function f(θ + α.d)
     * by performing a line search that does not rely upon the derivative.
     *
     * There are several alternatives for this function. For now, we
     * prefer a method inspired by the bisection method for its simplicity.
     * This algorithm attempts to find an optimum alpha value between 0.0001 and 0.01.
     *
     * The algorithm is as follows:
     *  a) Probe a small alpha  (0.0001) and calculate the cost function
     *  b) Probe a larger alpha (0.01) and calculate the cost function
     *     b-1) If the cost function decreases, continue enlarging alpha
     *     b-2) If the cost function increases, take the midpoint and try again
     */
    protected function getAlpha(float $d) : float
    {
        $small = 0.0001 * $d;
        $large = 0.01 * $d;

        // Obtain θ + α.d for two initial values, x0 and x1
        $x0 = mp::adds($this->theta, $small);
        $x1 = mp::adds($this->theta, $large);

        $epsilon = 0.0001;
        $iteration = 0;
        do {
            $fx1 = $this->cost($x1);
            $fx0 = $this->cost($x0);

            // If the difference between the two values is small enough,
            // then break the loop
            if (abs($fx1 - $fx0) <= $epsilon) {
                break;
            }

            if ($fx1 < $fx0) {
                $x0 = $x1;
                $x1 = mp::adds($x1, 0.01); // Enlarge second
            } else {
                $x1 = mp::divs(mp::add($x1, $x0), 2.0); // Get to the midpoint
            }

            $error = $fx1 / $this->dimensions;
        } while ($error > $epsilon && $iteration++ < 10); // Stop once converged, capped at 10 iterations

        // Return α = Δθ / d
        if ($d == 0) {
            return $x1[0] - $this->theta[0];
        }

        return ($x1[0] - $this->theta[0]) / $d;
    }
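
The midpoint step above is plain bisection on the bracket [x0, x1]; with the mp helpers defined at the bottom of this file it reads, for illustrative values:

$x0  = [0.5];
$x1  = [0.9];
$mid = mp::divs(mp::add($x1, $x0), 2.0); // [0.7] (divs adds a negligible 1e-32 guard)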

    /**
     * Calculates a new set of solutions with the given alpha (for each θ(k)) and
     * gradient direction.
     *
     * θ(k+1) = θ(k) + α.d
     */
    protected function getNewTheta(float $alpha, array $d) : array
    {
        $theta = $this->theta;

        for ($i = 0; $i < $this->dimensions + 1; ++$i) {
            if ($i === 0) {
                // Bias term: use the direction summed over all samples
                $theta[$i] += $alpha * array_sum($d);
            } else {
                // Feature weights: $d holds one entry per sample, so weight
                // each sample's direction by the corresponding feature value
                $sum = 0.0;
                foreach ($this->samples as $si => $sample) {
                    $sum += $sample[$i - 1] * $d[$si] * $alpha;
                }

                $theta[$i] += $sum;
            }
        }

        return $theta;
    }

    /**
     * Calculates the new beta (β) for a given set of solutions by using
     * the Fletcher–Reeves method.
     *
     * β = ||∇f(x(k+1))||²  ∕  ||∇f(x(k))||²
     *
     * See:
     *  R. Fletcher and C. M. Reeves, "Function minimization by conjugate gradients", Comput. J. 7 (1964), 149–154.
     */
    protected function getBeta(array $newTheta) : float
    {
        $dNew = array_sum($this->gradient($newTheta));
        $dOld = array_sum($this->gradient($this->theta)) + 1e-100; // Tiny offset guards against division by zero

        return $dNew ** 2 / $dOld ** 2;
    }
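
As a quick worked instance of the formula (the gradient sums below are illustrative numbers only):

$gNew = 0.5; // array_sum of the gradient at θ(k+1)
$gOld = 2.0; // array_sum of the gradient at θ(k)
$beta = $gNew ** 2 / $gOld ** 2; // 0.25 / 4.0 = 0.0625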

    /**
     * Calculates the new conjugate direction
     *
     * d(k+1) = –∇f(x(k+1)) + β(k).d(k)
     */
    protected function getNewDirection(array $theta, float $beta, array $d) : array
    {
        $grad = $this->gradient($theta);

        return mp::add(mp::muls($grad, -1), mp::muls($d, $beta));
    }
}
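
For context, a minimal usage sketch of the class. The constructor argument and the callback signature are assumptions, since neither appears in this diff: the base optimizer is assumed to take the number of dimensions, and the gradient callback is assumed to receive ($theta, $sample, $target) and return a per-sample [cost, gradient] pair, matching the destructuring of parent::gradient() above.

// Hypothetical example: fit y ≈ θ0 + θ1·x with a squared-error cost
$optimizer = new ConjugateGradient(1); // assumed: base class takes the dimension count

$gradientCb = function (array $theta, array $sample, float $target) : array {
    $prediction = $theta[0] + $theta[1] * $sample[0];
    $error      = $prediction - $target;

    return [$error ** 2, $error]; // assumed per-sample [cost, gradient] shape
};

$samples = [[1.0], [2.0], [3.0]];
$targets = [2.0, 4.0, 6.0];

$theta = $optimizer->runOptimization($samples, $targets, $gradientCb);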

/**
 * Handles element-wise vector operations between vector-vector
 * and vector-scalar variables
 */
class mp
{
    /**
     * Element-wise <b>multiplication</b> of two vectors of the same size
     */
    public static function mul(array $m1, array $m2) : array
    {
        $res = [];
        foreach ($m1 as $i => $val) {
            $res[] = $val * $m2[$i];
        }

        return $res;
    }

    /**
     * Element-wise <b>division</b> of two vectors of the same size
     */
    public static function div(array $m1, array $m2) : array
    {
        $res = [];
        foreach ($m1 as $i => $val) {
            $res[] = $val / $m2[$i];
        }

        return $res;
    }

    /**
     * Element-wise <b>addition</b> of two vectors of the same size
     * ($mag = -1 turns the addition into a subtraction)
     */
    public static function add(array $m1, array $m2, int $mag = 1) : array
    {
        $res = [];
        foreach ($m1 as $i => $val) {
            $res[] = $val + $mag * $m2[$i];
        }

        return $res;
    }

    /**
     * Element-wise <b>subtraction</b> of two vectors of the same size
     */
    public static function sub(array $m1, array $m2) : array
    {
        return self::add($m1, $m2, -1);
    }

    /**
     * Element-wise <b>multiplication</b> of a vector with a scalar
     */
    public static function muls(array $m1, float $m2) : array
    {
        $res = [];
        foreach ($m1 as $val) {
            $res[] = $val * $m2;
        }

        return $res;
    }

    /**
     * Element-wise <b>division</b> of a vector with a scalar
     */
    public static function divs(array $m1, float $m2) : array
    {
        $res = [];
        foreach ($m1 as $val) {
            $res[] = $val / ($m2 + 1e-32); // Tiny offset guards against division by zero
        }

        return $res;
    }

    /**
     * Element-wise <b>addition</b> of a vector with a scalar
     */
    public static function adds(array $m1, float $m2, int $mag = 1) : array
    {
        $res = [];
        foreach ($m1 as $val) {
            $res[] = $val + $mag * $m2;
        }

        return $res;
    }

    /**
     * Element-wise <b>subtraction</b> of a vector with a scalar
     */
    public static function subs(array $m1, float $m2) : array
    {
        return self::adds($m1, $m2, -1);
    }
}
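
The helpers are straightforward; a quick demonstration of the vector-vector and vector-scalar variants (values are illustrative):

$a = [1.0, 2.0, 3.0];
$b = [4.0, 5.0, 6.0];

mp::add($a, $b);   // [5.0, 7.0, 9.0]    (vector + vector)
mp::sub($a, $b);   // [-3.0, -3.0, -3.0] (add() with $mag = -1)
mp::muls($a, 2.0); // [2.0, 4.0, 6.0]    (vector × scalar)
mp::adds($a, 0.5); // [1.5, 2.5, 3.5]    (vector + scalar)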