Passed
Push — 2.x ( 9aebce...13ead5 )
by Ramon
07:01
created

senaite.core.datamanagers.field.sample_analyses   F

Complexity

Total Complexity 61

Size/Duplication

Total Lines 420
Duplicated Lines 4.29 %

Importance

Changes 0
Metric Value
wmc 61
eloc 215
dl 18
loc 420
rs 3.52
c 0
b 0
f 0

15 Methods

Rating   Name   Duplication   Size   Complexity  
A SampleAnalysesFieldDataManager.get() 0 25 2
A SampleAnalysesFieldDataManager.__init__() 0 4 1
A SampleAnalysesFieldDataManager.remove_analysis() 0 31 5
C SampleAnalysesFieldDataManager.set() 0 76 11
A SampleAnalysesFieldDataManager.resolve_range() 0 18 4
A SampleAnalysesFieldDataManager.get_from_ancestor() 0 9 2
A SampleAnalysesFieldDataManager.resolve_conditions() 0 28 3
A SampleAnalysesFieldDataManager.get_analyses_from_descendants() 0 7 2
A SampleAnalysesFieldDataManager._to_service() 0 31 5
A SampleAnalysesFieldDataManager.resolve_specs() 0 21 5
A SampleAnalysesFieldDataManager.resolve_uid() 18 18 5
A SampleAnalysesFieldDataManager.get_from_descendant() 0 15 3
A SampleAnalysesFieldDataManager.get_from_instance() 0 8 2
B SampleAnalysesFieldDataManager.add_analysis() 0 59 7
A SampleAnalysesFieldDataManager.resolve_analyses() 0 32 4

How to fix   Duplicated Code    Complexity   

Duplicated Code

Duplicate code is one of the most pungent code smells. A rule that is often used is to re-structure code once it is duplicated in three or more places.
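For instance (a hypothetical sketch, not taken from this codebase), a validation check repeated across three call sites can be pulled into a single helper once the third copy appears:

```python
def is_valid_uid(uid):
    """Shared helper replacing three inline copies of the same check.

    The 32-character rule used here is illustrative only.
    """
    return isinstance(uid, str) and len(uid) == 32 and uid != "0" * 32


def add_item(uid, items):
    if not is_valid_uid(uid):  # was duplicated inline before extraction
        raise ValueError("invalid uid: {!r}".format(uid))
    items.append(uid)


def remove_item(uid, items):
    if not is_valid_uid(uid):  # same check, now shared
        raise ValueError("invalid uid: {!r}".format(uid))
    items.remove(uid)
```

With the helper in place, a change to the validation rule happens in one spot instead of three.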

Common duplication problems and corresponding solutions are:

Complexity

 Tip:   Before tackling complexity, make sure that you eliminate any duplication first. This can often reduce the size of classes significantly.

Complex classes like senaite.core.datamanagers.field.sample_analyses often do a lot of different things. To break such a class down, we need to identify a cohesive component within that class. A common approach to finding such a component is to look for fields/methods that share the same prefixes or suffixes.

Once you have determined the fields that belong together, you can apply the Extract Class refactoring. If the component makes sense as a sub-class, Extract Subclass is also a candidate, and is often faster.
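In the class below, the get_from_instance/get_from_ancestor/get_from_descendant family shares the get_from_ prefix and would be one candidate for extraction. A minimal, hypothetical sketch of the resulting shape (plain dicts stand in for content objects, and all names are illustrative):

```python
class AnalysisLookup:
    """Cohesive lookup component extracted from the data manager."""

    def get_from_instance(self, instance, service_uid):
        # Filter the instance's analyses by service UID
        return [a for a in instance.get("analyses", [])
                if a.get("service_uid") == service_uid]


class SlimDataManager:
    """The remaining manager delegates lookups to the extracted class."""

    def __init__(self, context):
        self.context = context
        self.lookup = AnalysisLookup()

    def resolve(self, service_uid):
        return self.lookup.get_from_instance(self.context, service_uid)
```

Each extracted class then carries its own (lower) complexity score, and the manager shrinks to coordination logic.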

# -*- coding: utf-8 -*-

import itertools

from AccessControl import Unauthorized
from bika.lims import api
from bika.lims import logger
from bika.lims.api.security import check_permission
from bika.lims.interfaces import IAnalysis
from bika.lims.interfaces import IAnalysisService
from bika.lims.interfaces import ISubmitted
from bika.lims.utils.analysis import create_analysis
from senaite.core.catalog import ANALYSIS_CATALOG
from senaite.core.catalog import SETUP_CATALOG
from senaite.core.datamanagers.base import FieldDataManager
from senaite.core.permissions import AddAnalysis

DETACHED_STATES = ["cancelled", "retracted", "rejected"]

class SampleAnalysesFieldDataManager(FieldDataManager):
    """Data Manager for Routine Analyses
    """
    def __init__(self, context, request, field):
        self.context = context
        self.request = request
        self.field = field

    def get(self, **kw):
        """Returns a list of Analyses assigned to this AR

        Return a list of catalog brains unless `full_objects=True` is passed.
        Other keyword arguments are passed to senaite_catalog_analysis

        :param kw: Keyword arguments to inject in the search query
        :returns: A list of Analysis objects/catalog brains
        """
        # Filter out parameters from kwargs that don't match with indexes
        catalog = api.get_tool(ANALYSIS_CATALOG)
        indexes = catalog.indexes()
        query = dict([(k, v) for k, v in kw.items() if k in indexes])

        query["portal_type"] = "Analysis"
        query["getAncestorsUIDs"] = api.get_uid(self.context)
        query["sort_on"] = kw.get("sort_on", "sortable_title")
        query["sort_order"] = kw.get("sort_order", "ascending")

        # Do the search against the catalog
        brains = catalog(query)
        if kw.get("full_objects", False):
            return map(api.get_object, brains)
        return brains

    def set(self, items, prices, specs, hidden, **kw):
        """Set/Assign Analyses to this AR

        :param items: List of Analysis objects/brains, AnalysisService
                      objects/brains and/or Analysis Service UIDs
        :type items: list
        :param prices: Mapping of AnalysisService UID -> price
        :type prices: dict
        :param specs: List of AnalysisService UID -> Result Range mappings
        :type specs: list
        :param hidden: List of AnalysisService UID -> Hidden mappings
        :type hidden: list
        :returns: list of new assigned Analyses
        """
        if items is None:
            items = []

        # Bail out if items is not a list type
        if not isinstance(items, (list, tuple)):
            raise TypeError(
                "Items parameter must be a tuple or list, got '{}'".format(
                    type(items)))

        # Bail out if the AR is inactive
        if not api.is_active(self.context):
            raise Unauthorized("Inactive ARs can not be modified")

        # Bail out if the user does not have the required permission
        if not check_permission(AddAnalysis, self.context):
            raise Unauthorized("You do not have the '{}' permission"
                               .format(AddAnalysis))

        # Convert the items to a valid list of AnalysisServices
        services = filter(None, map(self._to_service, items))

        # Calculate dependencies
        dependencies = map(lambda s: s.getServiceDependencies(), services)
        dependencies = list(itertools.chain.from_iterable(dependencies))

        # Merge dependencies and services
        services = set(services + dependencies)

        # Modify existing AR specs with new form values of selected analyses
        specs = self.resolve_specs(self.context, specs)

        # Add analyses
        params = dict(prices=prices, hidden=hidden, specs=specs)
        map(lambda serv: self.add_analysis(self.context, serv, **params),
            services)

        # Get all analyses (those from descendants included)
        analyses = self.context.objectValues("Analysis")
        analyses.extend(self.get_analyses_from_descendants(self.context))

        # Discard those not in the services list or already submitted
        uids = map(api.get_uid, services)
        to_remove = filter(lambda an: an.getServiceUID() not in uids, analyses)
        to_remove = filter(lambda an: not ISubmitted.providedBy(an), to_remove)

        # Remove analyses
        map(self.remove_analysis, to_remove)

        # Store the UIDs in the instance's attribute for this field
        # Note we only store the UIDs of the contained analyses!
        contained = self.context.objectValues("Analysis")
        contained_uids = [analysis.UID() for analysis in contained]
        self.field.setRaw(self.context, contained_uids)

    def resolve_specs(self, instance, results_ranges):
        """Returns a dictionary where the key is the service_uid and the value
        is its results range. The dictionary is made by extending the
        results_ranges passed-in with the Sample's ResultsRanges (a copy of
        the specifications initially set)
        """
        rrs = results_ranges or []

        # Sample's Results ranges
        sample_rrs = instance.getResultsRange()

        # Ensure all subfields from the specification are kept and missing
        # subfield values are filled in accordance with the specs
        rrs = map(lambda rr: self.resolve_range(rr, sample_rrs), rrs)

        # Append those from the sample that are missing in the ranges passed-in
        service_uids = map(lambda rr: rr["uid"], rrs)
        rrs.extend(filter(lambda rr: rr["uid"] not in service_uids, sample_rrs))

        # Create a dict for easy access to results ranges
        return dict(map(lambda rr: (rr["uid"], rr), rrs))

    def resolve_range(self, result_range, sample_result_ranges):
        """Resolves the range by adding the uid if not present and filling the
        missing subfield values with those that come from the Sample
        specification if they are not present in the result_range passed-in
        """
        # Resolve result_range to make sure it contains the uid subfield
        rrs = self.resolve_uid(result_range)
        uid = rrs.get("uid")

        for sample_rr in sample_result_ranges:
            if uid and sample_rr.get("uid") == uid:
                # Keep the same fields from the sample
                rr = sample_rr.copy()
                rr.update(rrs)
                return rr

        # Return the original with no changes
        return rrs

    def resolve_uid(self, result_range):
        """Resolves the uid key for the result_range passed in if it does not
        exist, inferring it from the keyword when possible
        """
        value = result_range.copy()
        uid = value.get("uid")
        if api.is_uid(uid) and uid != "0":
            return value

        # uid key does not exist or is not valid, try to infer it from keyword
        keyword = value.get("keyword")
        if keyword:
            query = dict(portal_type="AnalysisService", getKeyword=keyword)
            brains = api.search(query, SETUP_CATALOG)
            if len(brains) == 1:
                uid = api.get_uid(brains[0])
        value["uid"] = uid
        return value

    def resolve_conditions(self, analysis):
        """Returns the conditions to be applied to this analysis by merging
        those already set at sample level with the service defaults
        """
        service = analysis.getAnalysisService()
        default_conditions = service.getConditions()

        # Extract the conditions already set for this analysis
        existing = analysis.getConditions()
        existing_titles = [cond.get("title") for cond in existing]

        def is_missing(condition):
            return condition.get("title") not in existing_titles

        # Add only those conditions that are missing
        missing = filter(is_missing, default_conditions)

        # Sort them to match the order of the conditions in the service
        titles = [condition.get("title") for condition in default_conditions]

        def index(condition):
            cond_title = condition.get("title")
            if cond_title in titles:
                return titles.index(cond_title)
            return len(titles)

        conditions = existing + missing
        return sorted(conditions, key=index)

    def add_analysis(self, instance, service, **kwargs):
        """Adds (or updates) the analyses for the given service to the instance
        """
        service_uid = api.get_uid(service)

        # Ensure we have suitable parameters
        specs = kwargs.get("specs") or {}

        # Get the hidden status for the service
        hidden = kwargs.get("hidden") or []
        hidden = filter(lambda d: d.get("uid") == service_uid, hidden)
        hidden = hidden and hidden[0].get("hidden") or service.getHidden()

        # Get the price for the service
        prices = kwargs.get("prices") or {}
        price = prices.get(service_uid) or service.getPrice()

        # Get the default result for the service
        default_result = service.getDefaultResult()

        # Get or create the analyses for this service
        # Note this returns a list, because it is possible to have multiple
        # partitions with the same analysis
        analyses = self.resolve_analyses(instance, service)

        # Filter out analyses in detached states
        # This allows to re-add an analysis that was retracted or cancelled
        analyses = filter(
            lambda an: api.get_workflow_status_of(an) not in DETACHED_STATES,
            analyses)

        if not analyses:
            # Create the analysis
            analysis = create_analysis(instance, service)
            analyses.append(analysis)

        for analysis in analyses:
            # Set the hidden status
            analysis.setHidden(hidden)

            # Set the price of the Analysis
            analysis.setPrice(price)

            # Set the internal use status
            parent_sample = analysis.getRequest()
            analysis.setInternalUse(parent_sample.getInternalUse())

            # Set the default result to the analysis
            if not analysis.getResult() and default_result:
                analysis.setResult(default_result)
                analysis.setResultCaptureDate(None)

            # Set the result range to the analysis
            analysis_rr = specs.get(service_uid) or analysis.getResultsRange()
            analysis.setResultsRange(analysis_rr)

            # Set default (pre)conditions
            conditions = self.resolve_conditions(analysis)
            analysis.setConditions(conditions)

            analysis.reindexObject()

    def remove_analysis(self, analysis):
        """Removes the given analysis from the instance
        """
        # Remember assigned attachments
        # https://github.com/senaite/senaite.core/issues/1025
        attachments = analysis.getAttachment()
        analysis.setAttachment([])

        # If assigned to a worksheet, unassign it before deletion
        worksheet = analysis.getWorksheet()
        if worksheet:
            worksheet.removeAnalysis(analysis)

        # Handle the deletion of a retest source
        retest = analysis.getRetest()
        if retest:
            # unset the reference link
            retest.setRetestOf(None)

        # Remove the analysis
        # Note the analysis might belong to a partition
        analysis.aq_parent.manage_delObjects(ids=[api.get_id(analysis)])

        # Remove orphaned attachments
        for attachment in attachments:
            if not attachment.getLinkedAnalyses():
                # only delete attachments that are no longer linked
                logger.info(
                    "Deleting attachment: {}".format(attachment.getId()))
                attachment_id = api.get_id(attachment)
                api.get_parent(attachment).manage_delObjects(attachment_id)

    def resolve_analyses(self, instance, service):
        """Resolves the analyses for the given service and instance
        It returns a list, because for a given sample, multiple analyses for
        the same service can exist due to the possibility of having multiple
        partitions
        """
        analyses = []

        # Does the analysis exist in this instance already?
        instance_analyses = self.get_from_instance(instance, service)
        if instance_analyses:
            analyses.extend(instance_analyses)

        # Does the analysis exist in an ancestor?
        from_ancestor = self.get_from_ancestor(instance, service)
        for ancestor_analysis in from_ancestor:
            # only move non-assigned analyses
            state = api.get_workflow_status_of(ancestor_analysis)
            if state != "unassigned":
                continue
            # Move the analysis into the partition
            analysis_id = api.get_id(ancestor_analysis)
            logger.info("Analysis {} is from an ancestor".format(analysis_id))
            cp = ancestor_analysis.aq_parent.manage_cutObjects(analysis_id)
            instance.manage_pasteObjects(cp)
            analyses.append(instance._getOb(analysis_id))

        # Does the analysis exist in descendants?
        from_descendant = self.get_from_descendant(instance, service)
        analyses.extend(from_descendant)

        return analyses

    def get_analyses_from_descendants(self, instance):
        """Returns all the analyses from descendants
        """
        analyses = []
        for descendant in instance.getDescendants(all_descendants=True):
            analyses.extend(descendant.objectValues("Analysis"))
        return analyses

    def get_from_instance(self, instance, service):
        """Returns the analyses for the given service from the instance
        """
        service_uid = api.get_uid(service)
        analyses = instance.objectValues("Analysis")
        # Filter those analyses with the same service. Note that a Sample can
        # contain more than one analysis with the same keyword because of
        # retests
        return filter(lambda an: an.getServiceUID() == service_uid, analyses)

    def get_from_ancestor(self, instance, service):
        """Returns the analyses for the given service from ancestors
        """
        ancestor = instance.getParentAnalysisRequest()
        if not ancestor:
            return []

        analyses = self.get_from_instance(ancestor, service)
        return analyses or self.get_from_ancestor(ancestor, service)

    def get_from_descendant(self, instance, service):
        """Returns the analyses for the given service from descendants
        """
        analyses = []
        for descendant in instance.getDescendants():
            # Does the analysis exist in the current descendant?
            descendant_analyses = self.get_from_instance(descendant, service)
            if descendant_analyses:
                analyses.extend(descendant_analyses)

            # Search in the descendants of the current descendant
            from_descendant = self.get_from_descendant(descendant, service)
            analyses.extend(from_descendant)

        return analyses

    def _to_service(self, thing):
        """Convert to Analysis Service

        :param thing: UID/Catalog Brain/Object/Something
        :returns: Analysis Service object or None
        """
        # Convert UIDs to objects
        if api.is_uid(thing):
            thing = api.get_object_by_uid(thing, None)

        # Bail out if the thing is not a valid object
        if not api.is_object(thing):
            logger.warn("'{}' is not a valid object!".format(repr(thing)))
            return None

        # Ensure we have an object here and not a brain
        obj = api.get_object(thing)

        if IAnalysisService.providedBy(obj):
            return obj

        if IAnalysis.providedBy(obj):
            return obj.getAnalysisService()

        # An object, but neither an Analysis nor an AnalysisService?
        # This should never happen.
        portal_type = api.get_portal_type(obj)
        logger.error("ARAnalysesField doesn't accept objects of type {}. "
                     "The object will be dismissed.".format(portal_type))
        return None
420