Passed · Push to 2.x ( 13ead5...9cfaa4 ) by Ramon · created 06:52

senaite.core.datamanagers.field.sample_analyses — Rating: D

Complexity
Total Complexity: 59

Size/Duplication
Total Lines: 411
Duplicated Lines: 4.38 %

Importance
Changes: 0

Metric  Value
wmc     59
eloc    208
dl      18
loc     411
rs      4.08
c       0
b       0
f       0

15 Methods

Rating   Name   Duplication   Size   Complexity  
A SampleAnalysesFieldDataManager.get() 0 25 2
A SampleAnalysesFieldDataManager.__init__() 0 4 1
A SampleAnalysesFieldDataManager.remove_analysis() 0 31 5
C SampleAnalysesFieldDataManager.set() 0 67 9
A SampleAnalysesFieldDataManager.resolve_range() 0 18 4
A SampleAnalysesFieldDataManager.get_from_ancestor() 0 9 2
A SampleAnalysesFieldDataManager.resolve_conditions() 0 28 3
A SampleAnalysesFieldDataManager.get_analyses_from_descendants() 0 7 2
A SampleAnalysesFieldDataManager._to_service() 0 31 5
A SampleAnalysesFieldDataManager.resolve_specs() 0 21 5
A SampleAnalysesFieldDataManager.resolve_uid() 18 18 5
A SampleAnalysesFieldDataManager.get_from_descendant() 0 15 3
A SampleAnalysesFieldDataManager.get_from_instance() 0 8 2
B SampleAnalysesFieldDataManager.add_analysis() 0 59 7
A SampleAnalysesFieldDataManager.resolve_analyses() 0 32 4

How to fix

Duplicated Code

Duplicate code is one of the most pungent code smells. A commonly used rule of thumb, the "rule of three", is to restructure code once it is duplicated in three or more places.

Common duplication problems can usually be solved by extracting the repeated code into a shared function, method, or base class.
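As an illustrative sketch of that approach (all names below are invented for illustration, not taken from the analyzed module), a lookup that two functions once repeated can be pulled into a single shared helper that both callers use:

```python
def find_service_uid(services, keyword):
    """Shared helper holding the once-duplicated lookup logic."""
    matches = [s["uid"] for s in services if s.get("keyword") == keyword]
    # Only resolve when the keyword maps to exactly one service
    return matches[0] if len(matches) == 1 else None


def resolve_range_uid(services, result_range):
    # First caller: result ranges coming from a form
    rr = dict(result_range)
    if not rr.get("uid"):
        rr["uid"] = find_service_uid(services, rr.get("keyword"))
    return rr


def resolve_condition_uid(services, condition):
    # Second caller: analysis conditions, same lookup written once
    cond = dict(condition)
    if not cond.get("uid"):
        cond["uid"] = find_service_uid(services, cond.get("keyword"))
    return cond
```

Both callers now change together whenever the lookup rule changes, which is the point of the refactoring.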

Complexity

 Tip: Before tackling complexity, make sure that you eliminate any duplication first. This can often reduce the size of classes significantly.

Complex classes like senaite.core.datamanagers.field.sample_analyses often do a lot of different things. To break such a class down, we need to identify a cohesive component within that class. A common approach to find such a component is to look for fields/methods that share the same prefixes or suffixes.

Once you have determined the fields that belong together, you can apply the Extract Class refactoring. If the component makes sense as a sub-class, Extract Subclass is also a candidate, and is often faster.
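A hypothetical sketch of Extract Class on this module (class, method, and field names below are invented; only the resolve_/get_from_ prefixes come from the report): the methods concerned with locating analyses across the sample/partition tree could move into a cohesive component of their own.

```python
class Sample(object):
    """Toy stand-in for a sample/partition tree (hypothetical)."""

    def __init__(self, analyses=None, children=None):
        self.analyses = analyses or []   # list of dicts with "service_uid"
        self.children = children or []   # sub-partitions


class AnalysisResolver(object):
    """Extracted component: finding analyses for a service.

    In the real class this could absorb resolve_analyses(),
    get_from_instance(), get_from_ancestor() and get_from_descendant().
    """

    def __init__(self, instance):
        self.instance = instance

    def resolve(self, service_uid):
        # Collect matches on this instance, then recurse into children
        found = [a for a in self.instance.analyses
                 if a["service_uid"] == service_uid]
        for child in self.instance.children:
            found.extend(AnalysisResolver(child).resolve(service_uid))
        return found
```

The data manager would then keep only the add/remove/set orchestration and delegate all lookups to the extracted resolver.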

# -*- coding: utf-8 -*-

import itertools

from AccessControl import Unauthorized
from bika.lims import api
from bika.lims import logger
from bika.lims.api.security import check_permission
from bika.lims.interfaces import IAnalysis
from bika.lims.interfaces import IAnalysisService
from bika.lims.interfaces import ISubmitted
from bika.lims.utils.analysis import create_analysis
from senaite.core.catalog import ANALYSIS_CATALOG
from senaite.core.catalog import SETUP_CATALOG
from senaite.core.datamanagers.base import FieldDataManager
from senaite.core.permissions import AddAnalysis

DETACHED_STATES = ["cancelled", "retracted", "rejected"]


class SampleAnalysesFieldDataManager(FieldDataManager):
    """Data Manager for Routine Analyses
    """
    def __init__(self, context, request, field):
        self.context = context
        self.request = request
        self.field = field

    def get(self, **kw):
        """Returns a list of Analyses assigned to this AR

        Return a list of catalog brains unless `full_objects=True` is passed.
        Other keyword arguments are passed to senaite_catalog_analysis

        :param instance: Analysis Request object
        :param kwargs: Keyword arguments to inject in the search query
        :returns: A list of Analysis Objects/Catalog Brains
        """
        # Filter out parameters from kwargs that don't match with indexes
        catalog = api.get_tool(ANALYSIS_CATALOG)
        indexes = catalog.indexes()
        query = dict([(k, v) for k, v in kw.items() if k in indexes])

        query["portal_type"] = "Analysis"
        query["getAncestorsUIDs"] = api.get_uid(self.context)
        query["sort_on"] = kw.get("sort_on", "sortable_title")
        query["sort_order"] = kw.get("sort_order", "ascending")

        # Do the search against the catalog
        brains = catalog(query)
        if kw.get("full_objects", False):
            return map(api.get_object, brains)
        return brains

    def set(self, items, prices, specs, hidden, **kw):
        """Set/Assign Analyses to this AR

        :param items: List of Analysis objects/brains, AnalysisService
                      objects/brains and/or Analysis Service uids
        :type items: list
        :param prices: Mapping of AnalysisService UID -> price
        :type prices: dict
        :param specs: List of AnalysisService UID -> Result Range mappings
        :type specs: list
        :param hidden: List of AnalysisService UID -> Hidden mappings
        :type hidden: list
        :returns: list of new assigned Analyses
        """

        if items is None:
            items = []

        # Bail out if the items is not a list type
        if not isinstance(items, (list, tuple)):
            raise TypeError(
                "Items parameter must be a tuple or list, got '{}'".format(
                    type(items)))

        # Bail out if the AR is inactive
        if not api.is_active(self.context):
            raise Unauthorized("Inactive ARs can not be modified")

        # Bail out if the user has not the right permission
        if not check_permission(AddAnalysis, self.context):
            raise Unauthorized("You do not have the '{}' permission"
                               .format(AddAnalysis))

        # Convert the items to a valid list of AnalysisServices
        services = filter(None, map(self._to_service, items))

        # Calculate dependencies
        dependencies = map(lambda s: s.getServiceDependencies(), services)
        dependencies = list(itertools.chain.from_iterable(dependencies))

        # Merge dependencies and services
        services = set(services + dependencies)

        # Modify existing AR specs with new form values of selected analyses
        specs = self.resolve_specs(self.context, specs)

        # Add analyses
        params = dict(prices=prices, hidden=hidden, specs=specs)
        map(lambda serv: self.add_analysis(self.context, serv, **params), services)

        # Get all analyses (those from descendants included)
        analyses = self.context.objectValues("Analysis")
        analyses.extend(self.get_analyses_from_descendants(self.context))

        # Bail out those not in services list or submitted
        uids = map(api.get_uid, services)
        to_remove = filter(lambda an: an.getServiceUID() not in uids, analyses)
        to_remove = filter(lambda an: not ISubmitted.providedBy(an), to_remove)

        # Remove analyses
        map(self.remove_analysis, to_remove)

        # Store the uids in instance's attribute for this field
        # Note we only store the UIDs of the contained analyses!
        contained = self.context.objectValues("Analysis")
        contained_uids = [analysis.UID() for analysis in contained]
        self.field.setRaw(self.context, contained_uids)

    def resolve_specs(self, instance, results_ranges):
        """Returns a dictionary where the key is the service_uid and the value
        is its results range. The dictionary is made by extending the
        results_ranges passed-in with the Sample's ResultsRanges (a copy of the
        specifications initially set)
        """
        rrs = results_ranges or []

        # Sample's Results ranges
        sample_rrs = instance.getResultsRange()

        # Ensure all subfields from specification are kept and missing values
        # for subfields are filled in accordance with the specs
        rrs = map(lambda rr: self.resolve_range(rr, sample_rrs), rrs)

        # Append those from sample that are missing in the ranges passed-in
        service_uids = map(lambda rr: rr["uid"], rrs)
        rrs.extend(filter(lambda rr: rr["uid"] not in service_uids, sample_rrs))

        # Create a dict for easy access to results ranges
        return dict(map(lambda rr: (rr["uid"], rr), rrs))

    def resolve_range(self, result_range, sample_result_ranges):
        """Resolves the range by adding the uid if not present and filling the
        missing subfield values with those that come from the Sample
        specification if they are not present in the result_range passed-in
        """
        # Resolve result_range to make sure it contain uid subfield
        rrs = self.resolve_uid(result_range)
        uid = rrs.get("uid")

        for sample_rr in sample_result_ranges:
            if uid and sample_rr.get("uid") == uid:
                # Keep same fields from sample
                rr = sample_rr.copy()
                rr.update(rrs)
                return rr

        # Return the original with no changes
        return rrs

    def resolve_uid(self, result_range):
        """Resolves the uid key for the result_range passed in if it does not
        exist when contains a keyword
        """
        value = result_range.copy()
        uid = value.get("uid")
        if api.is_uid(uid) and uid != "0":
            return value

        # uid key does not exist or is not valid, try to infere from keyword
        keyword = value.get("keyword")
        if keyword:
            query = dict(portal_type="AnalysisService", getKeyword=keyword)
            brains = api.search(query, SETUP_CATALOG)
            if len(brains) == 1:
                uid = api.get_uid(brains[0])
        value["uid"] = uid
        return value

    def resolve_conditions(self, analysis):
        """Returns the conditions to be applied to this analysis by merging
        those already set at sample level with defaults
        """
        service = analysis.getAnalysisService()
        default_conditions = service.getConditions()

        # Extract the conditions set for this analysis already
        existing = analysis.getConditions()
        existing_titles = [cond.get("title") for cond in existing]

        def is_missing(condition):
            return condition.get("title") not in existing_titles

        # Add only those conditions that are missing
        missing = filter(is_missing, default_conditions)

        # Sort them to match with same order as in service
        titles = [condition.get("title") for condition in default_conditions]

        def index(condition):
            cond_title = condition.get("title")
            if cond_title in titles:
                return titles.index(cond_title)
            return len(titles)

        conditions = existing + missing
        return sorted(conditions, key=lambda con: index(con))

    def add_analysis(self, instance, service, **kwargs):
        service_uid = api.get_uid(service)

        # Ensure we have suitable parameters
        specs = kwargs.get("specs") or {}

        # Get the hidden status for the service
        hidden = kwargs.get("hidden") or []
        hidden = filter(lambda d: d.get("uid") == service_uid, hidden)
        hidden = hidden and hidden[0].get("hidden") or service.getHidden()

        # Get the price for the service
        prices = kwargs.get("prices") or {}
        price = prices.get(service_uid) or service.getPrice()

        # Get the default result for the service
        default_result = service.getDefaultResult()

        # Gets the analysis or creates the analysis for this service
        # Note this returns a list, because is possible to have multiple
        # partitions with same analysis
        analyses = self.resolve_analyses(instance, service)

        # Filter out analyses in detached states
        # This allows to re-add an analysis that was retracted or cancelled
        analyses = filter(
            lambda an: api.get_workflow_status_of(an) not in DETACHED_STATES,
            analyses)

        if not analyses:
            # Create the analysis
            analysis = create_analysis(instance, service)
            analyses.append(analysis)

        for analysis in analyses:
            # Set the hidden status
            analysis.setHidden(hidden)

            # Set the price of the Analysis
            analysis.setPrice(price)

            # Set the internal use status
            parent_sample = analysis.getRequest()
            analysis.setInternalUse(parent_sample.getInternalUse())

            # Set the default result to the analysis
            if not analysis.getResult() and default_result:
                analysis.setResult(default_result)
                analysis.setResultCaptureDate(None)

            # Set the result range to the analysis
            analysis_rr = specs.get(service_uid) or analysis.getResultsRange()
            analysis.setResultsRange(analysis_rr)

            # Set default (pre)conditions
            conditions = self.resolve_conditions(analysis)
            analysis.setConditions(conditions)

            analysis.reindexObject()

    def remove_analysis(self, analysis):
        """Removes a given analysis from the instance
        """
        # Remember assigned attachments
        # https://github.com/senaite/senaite.core/issues/1025
        attachments = analysis.getAttachment()
        analysis.setAttachment([])

        # If assigned to a worksheet, unassign it before deletion
        worksheet = analysis.getWorksheet()
        if worksheet:
            worksheet.removeAnalysis(analysis)

        # handle retest source deleted
        retest = analysis.getRetest()
        if retest:
            # unset reference link
            retest.setRetestOf(None)

        # Remove the analysis
        # Note the analysis might belong to a partition
        analysis.aq_parent.manage_delObjects(ids=[api.get_id(analysis)])

        # Remove orphaned attachments
        for attachment in attachments:
            if not attachment.getLinkedAnalyses():
                # only delete attachments which are no further linked
                logger.info(
                    "Deleting attachment: {}".format(attachment.getId()))
                attachment_id = api.get_id(attachment)
                api.get_parent(attachment).manage_delObjects(attachment_id)

    def resolve_analyses(self, instance, service):
        """Resolves analyses for the service and instance
        It returns a list, cause for a given sample, multiple analyses for same
        service can exist due to the possibility of having multiple partitions
        """
        analyses = []

        # Does the analysis exists in this instance already?
        instance_analyses = self.get_from_instance(instance, service)

        if instance_analyses:
            analyses.extend(instance_analyses)

        # Does the analysis exists in an ancestor?
        from_ancestor = self.get_from_ancestor(instance, service)
        for ancestor_analysis in from_ancestor:
            # only move non-assigned analyses
            state = api.get_workflow_status_of(ancestor_analysis)
            if state != "unassigned":
                continue
            # Move the analysis into the partition
            analysis_id = api.get_id(ancestor_analysis)
            logger.info("Analysis {} is from an ancestor".format(analysis_id))
            cp = ancestor_analysis.aq_parent.manage_cutObjects(analysis_id)
            instance.manage_pasteObjects(cp)
            analyses.append(instance._getOb(analysis_id))

        # Does the analysis exists in descendants?
        from_descendant = self.get_from_descendant(instance, service)
        analyses.extend(from_descendant)

        return analyses

    def get_analyses_from_descendants(self, instance):
        """Returns all the analyses from descendants
        """
        analyses = []
        for descendant in instance.getDescendants(all_descendants=True):
            analyses.extend(descendant.objectValues("Analysis"))
        return analyses

    def get_from_instance(self, instance, service):
        """Returns analyses for the given service from the instance
        """
        service_uid = api.get_uid(service)
        analyses = instance.objectValues("Analysis")
        # Filter those analyses with same keyword. Note that a Sample can
        # contain more than one analysis with same keyword because of retests
        return filter(lambda an: an.getServiceUID() == service_uid, analyses)

    def get_from_ancestor(self, instance, service):
        """Returns analyses for the given service from ancestors
        """
        ancestor = instance.getParentAnalysisRequest()
        if not ancestor:
            return []

        analyses = self.get_from_instance(ancestor, service)
        return analyses or self.get_from_ancestor(ancestor, service)

    def get_from_descendant(self, instance, service):
        """Returns analyses for the given service from descendants
        """
        analyses = []
        for descendant in instance.getDescendants():
            # Does the analysis exists in the current descendant?
            descendant_analyses = self.get_from_instance(descendant, service)
            if descendant_analyses:
                analyses.extend(descendant_analyses)

            # Search in descendants from current descendant
            from_descendant = self.get_from_descendant(descendant, service)
            analyses.extend(from_descendant)

        return analyses

    def _to_service(self, thing):
        """Convert to Analysis Service

        :param thing: UID/Catalog Brain/Object/Something
        :returns: Analysis Service object or None
        """

        # Convert UIDs to objects
        if api.is_uid(thing):
            thing = api.get_object_by_uid(thing, None)

        # Bail out if the thing is not a valid object
        if not api.is_object(thing):
            logger.warn("'{}' is not a valid object!".format(repr(thing)))
            return None

        # Ensure we have an object here and not a brain
        obj = api.get_object(thing)

        if IAnalysisService.providedBy(obj):
            return obj

        if IAnalysis.providedBy(obj):
            return obj.getAnalysisService()

        # An object, but neither an Analysis nor AnalysisService?
        # This should never happen.
        portal_type = api.get_portal_type(obj)
        logger.error("ARAnalysesField doesn't accept objects from {} type. "
                     "The object will be dismissed.".format(portal_type))
        return None
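The reconciliation that set() performs above (merge the requested services with their dependencies, add what is missing, and remove no-longer-selected analyses unless they were already submitted) can be sketched free of Zope as plain set arithmetic. All names below are invented for illustration, and dependencies are shown one level deep for brevity:

```python
def reconcile(existing, requested, dependencies, submitted):
    """Return (to_keep, to_add, to_remove) as sets of service UIDs.

    existing:     UIDs of analyses currently on the sample
    requested:    UIDs of the services selected in the form
    dependencies: mapping UID -> set of dependency UIDs (one level)
    submitted:    UIDs of analyses already submitted (never removed)
    """
    # Merge the requested services with their dependencies
    wanted = set(requested)
    for uid in requested:
        wanted |= dependencies.get(uid, set())

    # Services that have no analysis yet must be added
    to_add = wanted - set(existing)

    # Analyses no longer wanted are removed, unless already submitted
    to_remove = set(existing) - wanted - set(submitted)
    to_keep = set(existing) - to_remove
    return to_keep, to_add, to_remove
```

Note how submitted analyses survive deselection, mirroring the ISubmitted filter in the real method.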
411