Passed
Push to master (d2a11f...643512) by Chris
01:59, queued 12s

abydos.distance._guth.Guth.sim() (rated F)

Complexity

Conditions 33

Size

Total Lines 100
Code Lines 51

Duplication

Lines 0
Ratio 0 %

Code Coverage

Tests 51
CRAP Score 33

Importance

Changes 0
Metric  Value
------  -----
eloc    51
dl      0
loc     100
ccs     51
cts     51
cp      1
rs      0
c       0
b       0
f       0
cc      33
nop     3
crap    33
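
For context, the CRAP score follows directly from the complexity and coverage figures above, assuming the standard CRAP definition (Savoia and Evans), which this page does not state:

    CRAP(m) = comp(m)^2 * (1 - cov(m))^3 + comp(m)

With comp(m) = cc = 33 and full coverage (cp = 1), this gives 33^2 * 0^3 + 33 = 33, which is why the CRAP score here equals the cyclomatic complexity.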

How to fix

Long Method

Small methods make your code easier to understand, particularly when combined with a good name. And if your method is small, finding a good name is usually much easier.

For example, if you find yourself adding comments to a method's body, that is usually a good sign that the commented part should be extracted into a new method; the comment is a natural starting point for naming the new method.

Commonly applied refactorings include:

Extract Method
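
For instance, the first two matching rules in sim_score() (in the listing below) both ask whether a token occurs within a small window of the other string. A hypothetical Extract Method sketch (the name _in_window is illustrative, not part of Abydos):

def _in_window(token, name, pos):
    """Return True if token occurs in name[max(0, pos - 1) : pos + 3]."""
    return bool(token) and token in name[max(0, pos - 1) : pos + 3]

With such a helper, each rule in the loop body becomes a single named call, and the name documents the window logic that would otherwise need a comment.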

Complexity

Complex methods like abydos.distance._guth.Guth.sim() often do a lot of different things. To break such a method down, we need to identify a cohesive component within it. A common approach is to look for statements that operate on the same data, or that repeat the same pattern with small variations.

Once you have determined the statements that belong together, you can apply the Extract Method refactoring, as sketched below. At class level, the analogous refactoring is Extract Class; if the component makes sense as a sub-class, Extract Subclass is also a candidate, and is often faster.
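
Applied to the sim() method in the listing below, such a refactoring might look as follows. This is a hypothetical sketch, not Abydos code: the names _WEIGHTS and _weight_at, and _token_at as a free function, are all illustrative. The nine repeated compare-and-score blocks in sim() collapse into an offset/weight table plus one extracted helper, removing most of the method's 33 conditions.

# Hypothetical refactoring sketch, not Abydos code: the offset/weight
# table reproduces the cascade of checks in Guth.sim(), in order.
_WEIGHTS = (
    (0, 0, 1.0),   # exact positional match
    (0, 1, 0.8),   # source token found one position later in target
    (0, 2, 0.6),   # ... two positions later in target
    (0, -1, 0.8),  # ... one position earlier in target
    (-1, 0, 0.8),  # target token found one position earlier in source
    (1, 0, 0.8),   # ... one position later in source
    (2, 0, 0.6),   # ... two positions later in source
    (1, 1, 0.6),   # both strings match one position ahead
    (2, 2, 0.2),   # both strings match two positions ahead
)


def _token_at(name, pos):
    """Return name[pos], or None if pos is out of range."""
    if 0 <= pos < len(name):
        return name[pos]
    return None


def _weight_at(src, tar, pos):
    """Return the weight of the first rule that matches at pos, else 0.0."""
    for s_off, t_off, weight in _WEIGHTS:
        s = _token_at(src, pos + s_off)
        t = _token_at(tar, pos + t_off)
        if s and t and s == t:
            return weight
    return 0.0


def sim(src, tar):
    """Relative Guth similarity of two token sequences."""
    if src == tar:
        return 1.0
    if not src or not tar:
        return 0.0
    return sum(_weight_at(src, tar, pos) for pos in range(len(src))) / len(src)

As a spot check, this sketch reproduces the doctest values in the listing: sim('cat', 'hat') gives 0.8666666666666667 and sim('ATCG', 'TAGC') gives 0.8.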

# -*- coding: utf-8 -*-

# Copyright 2019 by Christopher C. Little.
# This file is part of Abydos.
#
# Abydos is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Abydos is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Abydos. If not, see <http://www.gnu.org/licenses/>.

"""abydos.distance._guth.

Guth matching algorithm
"""

from __future__ import (
    absolute_import,
    division,
    print_function,
    unicode_literals,
)

from ._distance import _Distance
from ..tokenizer import QGrams

__all__ = ['Guth']


class Guth(_Distance):
    r"""Guth matching.

    Guth matching :cite:`Guth:1976` uses a simple list of positional matching
    rules to determine whether two names match. Following the original, the
    :meth:`.sim_score` method returns only 1.0 for matching or 0.0 for
    non-matching.

    The :meth:`.sim` method instead penalizes more distant matches and never
    outright declares two names non-matching unless no matches can be made in
    the two strings.

    Tokens other than single characters can be matched by specifying a
    tokenizer during initialization or setting the qval parameter.

    .. versionadded:: 0.4.1
    """

    def __init__(self, tokenizer=None, **kwargs):
        """Initialize Guth instance.

        Parameters
        ----------
        tokenizer : _Tokenizer
            A tokenizer instance from the :py:mod:`abydos.tokenizer` package
        **kwargs
            Arbitrary keyword arguments

        Other Parameters
        ----------------
        qval : int
            The length of each q-gram. Using this parameter and tokenizer=None
            will cause the instance to use the QGrams tokenizer with this
            q value.


        .. versionadded:: 0.4.1

        """
        super(Guth, self).__init__(**kwargs)

        self.params['tokenizer'] = tokenizer
        if 'qval' in self.params:
            self.params['tokenizer'] = QGrams(
                qval=self.params['qval'], start_stop='$#', skip=0, scaler=None
            )

    def _token_at(self, name, pos):
        """Return the token of name at position pos.

        Parameters
        ----------
        name : str or list
            A string (or list) from which to return a token
        pos : int
            The position of the token to return

        Returns
        -------
        str
            The requested token or None if the position is invalid


        .. versionadded:: 0.4.1

        """
        if pos < 0:
            return None
        if pos >= len(name):
            return None
        return name[pos]

    def sim_score(self, src, tar):
        """Return the Guth matching score of two strings.

        Parameters
        ----------
        src : str
            Source string for comparison
        tar : str
            Target string for comparison

        Returns
        -------
        float
            Guth matching score (1.0 if matching, otherwise 0.0)

        Examples
        --------
        >>> cmp = Guth()
        >>> cmp.sim_score('cat', 'hat')
        1.0
        >>> cmp.sim_score('Niall', 'Neil')
        1.0
        >>> cmp.sim_score('aluminum', 'Catalan')
        0.0
        >>> cmp.sim_score('ATCG', 'TAGC')
        1.0


        .. versionadded:: 0.4.1

        """
        if src == tar:
            return 1.0
        if not src or not tar:
            return 0.0

        if self.params['tokenizer']:
            src = self.params['tokenizer'].tokenize(src).get_list()
            tar = self.params['tokenizer'].tokenize(tar).get_list()

        for pos in range(len(src)):
            # Rule 1: the source token at pos occurs in the target within
            # a window from one position behind to two positions ahead.
            s = self._token_at(src, pos)
            t = set(tar[max(0, pos - 1) : pos + 3])
            if s and s in t:
                continue

            # Rule 2: the target token at pos occurs in the same window of
            # the source.
            s = set(src[max(0, pos - 1) : pos + 3])
            t = self._token_at(tar, pos)
            if t and t in s:
                continue

            # Rule 3: the tokens one position ahead in both strings match.
            s = self._token_at(src, pos + 1)
            t = self._token_at(tar, pos + 1)
            if s and t and s == t:
                continue

            # Rule 4: the tokens two positions ahead in both strings match.
            s = self._token_at(src, pos + 2)
            t = self._token_at(tar, pos + 2)
            if s and t and s == t:
                continue

            # No rule matched at this position: the names do not match.
            break
        else:
            # Every position matched some rule: the names match.
            return 1.0
        return 0.0

    def sim(self, src, tar):
        """Return the relative Guth similarity of two strings.

        This deviates from the algorithm described in :cite:`Guth:1976` in
        that more distant matches are penalized, so that less similar terms
        score lower than more similar terms.

        If no match is found for a particular token in the source string, this
        does not result in an automatic 0.0 score. Rather, the score is further
        penalized towards 0.0.

        Parameters
        ----------
        src : str
            Source string for comparison
        tar : str
            Target string for comparison

        Returns
        -------
        float
            Relative Guth matching score

        Examples
        --------
        >>> cmp = Guth()
        >>> cmp.sim('cat', 'hat')
        0.8666666666666667
        >>> cmp.sim('Niall', 'Neil')
        0.8800000000000001
        >>> cmp.sim('aluminum', 'Catalan')
        0.4
        >>> cmp.sim('ATCG', 'TAGC')
        0.8


        .. versionadded:: 0.4.1

        """
        if src == tar:
            return 1.0
        if not src or not tar:
            return 0.0

        if self.params['tokenizer']:
            src = self.params['tokenizer'].tokenize(src).get_list()
            tar = self.params['tokenizer'].tokenize(tar).get_list()

        score = 0
        for pos in range(len(src)):
            # An exact positional match scores a full point.
            s = self._token_at(src, pos)
            t = self._token_at(tar, pos)
            if s and t and s == t:
                score += 1.0
                continue

            # The source token found one position later in the target: 0.8.
            t = self._token_at(tar, pos + 1)
            if s and t and s == t:
                score += 0.8
                continue

            # ... two positions later in the target: 0.6.
            t = self._token_at(tar, pos + 2)
            if s and t and s == t:
                score += 0.6
                continue

            # ... one position earlier in the target: 0.8.
            t = self._token_at(tar, pos - 1)
            if s and t and s == t:
                score += 0.8
                continue

            # The target token found one position earlier in the source: 0.8.
            s = self._token_at(src, pos - 1)
            t = self._token_at(tar, pos)
            if s and t and s == t:
                score += 0.8
                continue

            # ... one position later in the source: 0.8.
            s = self._token_at(src, pos + 1)
            if s and t and s == t:
                score += 0.8
                continue

            # ... two positions later in the source: 0.6.
            s = self._token_at(src, pos + 2)
            if s and t and s == t:
                score += 0.6
                continue

            # Both strings match one position ahead of pos: 0.6.
            s = self._token_at(src, pos + 1)
            t = self._token_at(tar, pos + 1)
            if s and t and s == t:
                score += 0.6
                continue

            # Both strings match two positions ahead of pos: 0.2.
            s = self._token_at(src, pos + 2)
            t = self._token_at(tar, pos + 2)
            if s and t and s == t:
                score += 0.2
                continue

        return score / len(src)


if __name__ == '__main__':
    import doctest

    doctest.testmod()
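
Finally, a brief usage sketch. The import path assumes the package-level export from abydos.distance (not shown in this listing); the qval parameter, as described in the __init__ docstring, switches matching from single characters to q-grams.

from abydos.distance import Guth

cmp = Guth()
print(cmp.sim('Niall', 'Neil'))  # 0.8800000000000001, per the doctest above

# Match 2-grams instead of single characters; the resulting score will
# generally differ from the single-character score.
cmp_bigram = Guth(qval=2)
print(cmp_bigram.sim('Niall', 'Neil'))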