senaite.core.exportimport.instruments.importer (rating: F)

Complexity

Total Complexity 180

Size/Duplication

Total Lines 1034
Duplicated Lines 0 %

Importance

Changes 0
Metric Value
wmc 180
eloc 610
dl 0
loc 1034
rs 1.99
c 0
b 0
f 0

53 Methods

Rating   Name   Duplication   Size   Complexity  
A AnalysisResultsImporter.analysis_catalog() 0 3 1
A AnalysisResultsImporter.get_automatic_importer() 0 4 1
A AnalysisResultsImporter.services() 0 6 1
C AnalysisResultsImporter.convert_analysis_result() 0 28 9
A AnalysisResultsImporter.is_valid_keyword() 0 8 2
A AnalysisResultsImporter.bika_setup() 0 5 1
B AnalysisResultsImporter.can_override_analysis_result() 0 14 6
A AnalysisResultsImporter.save_submit_analysis() 0 7 2
A AnalysisResultsImporter.setup_catalog() 0 3 1
A AnalysisResultsImporter.getAllowedARStates() 0 8 1
A AnalysisResultsImporter.get_attachment_filenames() 0 10 3
A AnalysisResultsImporter.create_mime_attachmenttype() 0 10 2
A AnalysisResultsImporter.override_with_empty() 0 5 1
B AnalysisResultsImporter.attach_attachment() 0 32 5
A AnalysisResultsImporter.instrument() 0 5 2
A AnalysisResultsImporter.parser() 0 6 1
A AnalysisResultsImporter.override_non_empty() 0 5 1
A AnalysisResultsImporter.getOverride() 0 11 1
A AnalysisResultsImporter.ar_catalog() 0 5 1
A AnalysisResultsImporter.senaite_catalog() 0 3 1
B AnalysisResultsImporter.set_analysis_interims() 0 42 6
A AnalysisResultsImporter.sample_catalog() 0 3 1
A AnalysisResultsImporter._getObjects() 0 3 1
A AnalysisResultsImporter.get_analyses_for() 0 23 3
F AnalysisResultsImporter.process() 0 192 36
A AnalysisResultsImporter.calculateTotalResults() 0 35 5
A AnalysisResultsImporter.get_interim_fields() 0 7 2
F AnalysisResultsImporter._getZODBAnalysesFromAR() 0 32 16
A AnalysisResultsImporter.getKeywordsToBeExcluded() 0 4 1
B AnalysisResultsImporter.find_objects() 0 39 8
A AnalysisResultsImporter.get_automatic_parser() 0 4 1
A AnalysisResultsImporter.bc() 0 5 1
A AnalysisResultsImporter.wf_tool() 0 3 1
A AnalysisResultsImporter.is_analysis_allowed() 0 10 3
A AnalysisResultsImporter.setup() 0 5 1
A AnalysisResultsImporter.parse_results() 0 13 2
A AnalysisResultsImporter.getParser() 0 3 1
C AnalysisResultsImporter._getZODBAnalysesFromReferenceAnalyses() 0 48 9
A AnalysisResultsImporter._process_analysis() 0 3 1
B AnalysisResultsImporter.set_analysis_result() 0 57 5
A AnalysisResultsImporter.get_attachment_type_by_title() 0 15 2
A AnalysisResultsImporter.bsc() 0 5 1
A AnalysisResultsImporter.create_attachment() 0 28 2
A AnalysisResultsImporter.attachment_types() 0 5 1
A AnalysisResultsImporter.get_reference_sample_by_id() 0 8 2
A AnalysisResultsImporter.process_analysis() 0 27 2
A AnalysisResultsImporter.wf() 0 5 1
B AnalysisResultsImporter.keywords() 0 22 6
A AnalysisResultsImporter.bac() 0 5 1
A AnalysisResultsImporter.__init__() 0 42 4
A AnalysisResultsImporter._getZODBAnalyses() 0 3 1
A AnalysisResultsImporter.getAllowedAnalysisStates() 0 8 1
B AnalysisResultsImporter.set_analysis_fields() 0 48 8

How to fix: Complexity

Complex classes like senaite.core.exportimport.instruments.importer often do many different things. To break such a class down, we need to identify a cohesive component within it. A common way to find such a component is to look for fields and methods that share the same prefixes or suffixes.

Once you have determined which fields and methods belong together, you can apply the Extract Class refactoring; a sketch of what that could look like for this importer follows below. If the component makes sense as a subclass, Extract Subclass is also a candidate and is often faster.
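For illustration, here is a minimal Extract Class sketch based on the catalog and tool accessors that appear further down in the listing (sample_catalog, analysis_catalog, setup_catalog, senaite_catalog, wf_tool), which all share the same suffix. The mixin name CatalogToolsMixin is hypothetical and not part of the source file; the property bodies are copied from the listing below.

from bika.lims import api
from senaite.core.catalog import ANALYSIS_CATALOG
from senaite.core.catalog import SAMPLE_CATALOG
from senaite.core.catalog import SENAITE_CATALOG
from senaite.core.catalog import SETUP_CATALOG
from zope.cachedescriptors.property import Lazy as lazy_property


class CatalogToolsMixin(object):
    """Catalog and tool lookups extracted from AnalysisResultsImporter"""

    @lazy_property
    def sample_catalog(self):
        # lazily resolve the sample catalog tool, as in the original class
        return api.get_tool(SAMPLE_CATALOG)

    @lazy_property
    def analysis_catalog(self):
        return api.get_tool(ANALYSIS_CATALOG)

    @lazy_property
    def setup_catalog(self):
        return api.get_tool(SETUP_CATALOG)

    @lazy_property
    def senaite_catalog(self):
        return api.get_tool(SENAITE_CATALOG)

    @lazy_property
    def wf_tool(self):
        return api.get_tool("portal_workflow")


# The importer would then inherit the accessors instead of defining them:
# class AnalysisResultsImporter(CatalogToolsMixin, Logger):
#     ...

The deprecated BBB aliases (bac, bc, bsc, ar_catalog, wf) could move along with these accessors, leaving the importer itself with the result-processing logic only.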

1
# -*- coding: utf-8 -*-
2
#
3
# This file is part of SENAITE.CORE.
4
#
5
# SENAITE.CORE is free software: you can redistribute it and/or modify it under
6
# the terms of the GNU General Public License as published by the Free Software
7
# Foundation, version 2.
8
#
9
# This program is distributed in the hope that it will be useful, but WITHOUT
10
# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
11
# FOR A PARTICULAR PURPOSE. See the GNU General Public License for more
12
# details.
13
#
14
# You should have received a copy of the GNU General Public License along with
15
# this program; if not, write to the Free Software Foundation, Inc., 51
16
# Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
17
#
18
# Copyright 2018-2025 by its authors.
19
# Some rights reserved, see README and LICENSE.
20
21
import six
22
from bika.lims import api
23
from bika.lims import bikaMessageFactory as _
24
from bika.lims import logger
25
from bika.lims.interfaces import IReferenceAnalysis
26
from bika.lims.interfaces import IRoutineAnalysis
27
from plone.memoize.view import memoize_contextless
28
from senaite.core.api import dtime
29
from senaite.core.catalog import ANALYSIS_CATALOG
30
from senaite.core.catalog import SAMPLE_CATALOG
31
from senaite.core.catalog import SENAITE_CATALOG
32
from senaite.core.catalog import SETUP_CATALOG
33
from senaite.core.exportimport.instruments.logger import Logger
34
from senaite.core.i18n import translate as t
35
from senaite.core.registry import get_registry_record
36
from zope.cachedescriptors.property import Lazy as lazy_property
37
from zope.deprecation import deprecate
38
39
ALLOWED_SAMPLE_STATES = ["sample_received", "to_be_verified"]
40
ALLOWED_ANALYSIS_STATES = ["unassigned", "assigned", "to_be_verified"]
41
DEFAULT_RESULT_KEY = "DefaultResult"
42
EMPTY_MARKER = object()
43
44
45
class AnalysisResultsImporter(Logger):
46
    """Results importer
47
    """
48
    def __init__(self, parser, context,
49
                 override=None,
50
                 allowed_sample_states=None,
51
                 allowed_analysis_states=None,
52
                 instrument_uid=None):
53
        super(AnalysisResultsImporter, self).__init__()
54
55
        self.context = context
56
57
        # results override settings
58
        self.override = override
59
        if override is None:
60
            self.override = [False, False]
61
62
        # allowed sample states
63
        self.allowed_sample_states = allowed_sample_states
64
        if not allowed_sample_states:
65
            self.allowed_sample_states = ALLOWED_SAMPLE_STATES
66
        # translated states
67
        self.allowed_sample_states_msg = [
68
            t(_(s)) for s in self.allowed_sample_states]
69
70
        # allowed analyses states
71
        self.allowed_analysis_states = allowed_analysis_states
72
        if not allowed_analysis_states:
73
            self.allowed_analysis_states = ALLOWED_ANALYSIS_STATES
74
        self.allowed_analysis_states_msg = [
75
            t(_(s)) for s in self.allowed_analysis_states]
76
77
        # instrument UID
78
        self.instrument_uid = instrument_uid
79
        self.priorizedsearchcriteria = ""
80
        # Search Indexes for Sample IDs
81
        self.searchcriteria = ["getId", "getClientSampleID"]
82
83
        # BBB
84
        self._parser = parser
85
        self.allowed_ar_states = self.allowed_sample_states
86
        self._allowed_analysis_states = self.allowed_analysis_states
87
        self._override = self.override
88
        self._idsearch = ["getId", "getClientSampleID"]
89
        self._priorizedsearchcriteria = self.priorizedsearchcriteria
90
91
    @property
92
    @deprecate("Please use self.wf_tool instead")
93
    def wf(self):
94
        # BBB
95
        return self.wf_tool
96
97
    @property
98
    @deprecate("Please use self.sample_catalog instead")
99
    def ar_catalog(self):
100
        # BBB
101
        return self.sample_catalog
102
103
    @property
104
    @deprecate("Please use self.analysis_catalog instead")
105
    def bac(self):
106
        # BBB
107
        return self.analysis_catalog
108
109
    @property
110
    @deprecate("Please use self.senaite_catalog instead")
111
    def bc(self):
112
        # BBB
113
        return self.senaite_catalog
114
115
    @property
116
    @deprecate("Please use self.setup_catalog instead")
117
    def bsc(self):
118
        # BBB
119
        return self.setup_catalog
120
121
    @lazy_property
122
    def sample_catalog(self):
123
        return api.get_tool(SAMPLE_CATALOG)
124
125
    @lazy_property
126
    def analysis_catalog(self):
127
        return api.get_tool(ANALYSIS_CATALOG)
128
129
    @lazy_property
130
    def setup_catalog(self):
131
        return api.get_tool(SETUP_CATALOG)
132
133
    @lazy_property
134
    def senaite_catalog(self):
135
        return api.get_tool(SENAITE_CATALOG)
136
137
    @lazy_property
138
    def wf_tool(self):
139
        return api.get_tool("portal_workflow")
140
141
    @lazy_property
142
    def bika_setup(self):
143
        """Get the bika setup object
144
        """
145
        return api.get_bika_setup()
146
147
    @lazy_property
148
    def setup(self):
149
        """Get the Senaite setup object
150
        """
151
        return api.get_senaite_setup()
152
153
    @lazy_property
154
    def attachment_types(self):
155
        """Get the senaite setup object
156
        """
157
        return self.setup.attachmenttypes
158
159
    @lazy_property
160
    def instrument(self):
161
        if not self.instrument_uid:
162
            return None
163
        return api.get_object(self.instrument_uid, None)
164
165
    @lazy_property
166
    def services(self):
167
        """Return all services
168
        """
169
        services = self.setup_catalog(portal_type="AnalysisService")
170
        return list(map(api.get_object, services))
171
172
    @property
173
    def parser(self):
174
        """Returns the parser that is used for the import
175
        """
176
        # Maybe we can use an adapter lookup here?
177
        return self._parser
178
179
    @parser.setter
180
    def parser(self, value):
181
        self._parser = value
182
183
    @deprecate("Please use self.parser instead")
184
    def getParser(self):
185
        return self.parser
186
187
    def get_automatic_importer(self, instrument, parser, **kw):
188
        """Return the automatic importer
189
        """
190
        raise NotImplementedError("Must be provided by Adapter Implementation")
191
192
    def get_automatic_parser(self, infile, **kw):
193
        """Return the automatic parser
194
        """
195
        raise NotImplementedError("Must be provided by Adapter Implementation")
196
197
    @deprecate("Please use self.allowed_sample_states instead")
198
    def getAllowedARStates(self):
199
        """BBB: Return allowed sample states
200
201
        The results import will only take into account the analyses contained
202
        inside Samples whose current state is one of these.
203
        """
204
        return self.allowed_sample_states
205
206
    @deprecate("Please use self.allowed_sample_states instead")
207
    def getAllowedAnalysisStates(self):
208
        """BBB: Return allowed analysis states
209
210
        The results import will only take into account an analysis if its
211
        current state is in the allowed analysis states.
212
        """
213
        return self.allowed_analysis_states
214
215
    @deprecate("Please use self.override instead")
216
    def getOverride(self):
217
        """BBB: Return result override flags
218
219
        Flags if the importer can override previously entered results.
220
221
        [False, False]: Results are not overridden (default)
222
        [True, False]:  Non-empty results are overridden, but not with empty values
223
        [True, True]:   Results are always overridden, also with empty values
224
        """
225
        return self.override
226
227
    @property
228
    def override_non_empty(self):
229
        """Returns if the value can be written
230
        """
231
        return self.override[0] is True
232
233
    @property
234
    def override_with_empty(self):
235
        """Returns if the value can be written
236
        """
237
        return self.override[1] is True
238
239
    def can_override_analysis_result(self, analysis, result):
240
        """Checks if the result can be overwritten or not
241
242
        :returns: True if existing results can be overwritten
243
        """
244
        analysis_result = analysis.getResult()
245
        empty_result = False
246
        if not result:
247
            empty_result = len(str(result).strip()) == 0
248
        if analysis_result and not self.override_non_empty:
249
            return False
250
        elif empty_result and not self.override_with_empty:
251
            return False
252
        return True
253
254
    def convert_analysis_result(self, analysis, result):
255
        """Convert the analysis result
256
257
        :returns: Converted analysis result
258
        """
259
260
        if api.is_floatable(result) and not analysis.getStringResult():
261
            # ensure floatable string result containing a decimal point
262
            result = str(result)
263
            if "." not in result:
264
                result = "{}.0".format(result)
265
266
        result_options = analysis.getResultOptions()
267
        result_type = analysis.getResultType()
268
269
        if result_options:
270
            # NOTE: Result options can be set as integer or float values!
271
            result_values = map(
272
                lambda r: r.get("ResultValue"), result_options)
273
            if result_type == "select" and api.is_floatable(result):
274
                # check if the integer result matches a result option
275
                selection = str(int(float(result)))
276
                if selection in result_values:
277
                    # XXX: Results like e.g. "1.1" or 1.2 match result options
278
                    # with the value set to "1" as well!
279
                    return selection
280
281
        return result
282
283
    def getKeywordsToBeExcluded(self):
284
        """Returns a list of analysis keywords to be excluded
285
        """
286
        return []
287
288
    def parse_results(self):
289
        """Parse the results file and return the raw results
290
        """
291
        parsed = self.parser.parse()
292
293
        if not parsed:
294
            return {}
295
296
        self.errors = self.parser.errors
297
        self.warns = self.parser.warns
298
        self.logs = self.parser.logs
299
300
        return self.parser.getRawResults()
301
302
    @lazy_property
303
    def keywords(self):
304
        """Return the parsed keywords
305
        """
306
        keywords = []
307
        for keyword in self.parser.getAnalysisKeywords():
308
            if not keyword:
309
                continue
310
            if keyword in self.getKeywordsToBeExcluded():
311
                continue
312
            # check if keyword is valid
313
            if not self.is_valid_keyword(keyword):
314
                self.warn(_("Service keyword {analysis_keyword} not found"
315
                            .format(analysis_keyword=keyword)))
316
                continue
317
            # remember the valid service keyword
318
            keywords.append(keyword)
319
320
        if len(keywords) == 0:
321
            self.warn(_("No services could be found for parsed keywords"))
322
323
        return keywords
324
325
    @memoize_contextless
326
    def is_valid_keyword(self, keyword):
327
        """Check if the keyword is valid
328
        """
329
        results = self.setup_catalog(getKeyword=keyword)
330
        if not results:
331
            return False
332
        return True
333
334
    def get_reference_sample_by_id(self, sid):
335
        """Get a reference sample by ID
336
        """
337
        query = {"portal_type": "ReferenceSample", "getId": sid}
338
        results = api.search(query, SENAITE_CATALOG)
339
        if len(results) == 0:
340
            return None
341
        return api.get_object(results[0])
342
343
    def get_attachment_type_by_title(self, title):
344
        """Get an attachment type by title
345
346
        :param title: Attachment type title
347
        :returns: Attachment object or None
348
        """
349
        query = {
350
            "portal_type": "AttachmentType",
351
            "title": title,
352
            "is_active": True,
353
        }
354
        results = self.setup_catalog(query)
355
        if not len(results) > 0:
356
            return None
357
        return api.get_object(results[0])
358
359
    def process(self):
360
        parsed_results = self.parse_results()
361
362
        # no parsed results, return
363
        if not parsed_results:
364
            return False
365
366
        # Log allowed sample and analyses states
367
        self.log(_("Allowed sample states: {allowed_states}"
368
                   .format(allowed_states=", ".join(
369
                       self.allowed_sample_states_msg))))
370
        self.log(_("Allowed analysis states: {allowed_states}"
371
                   .format(allowed_states=", ".join(
372
                       self.allowed_analysis_states_msg))))
373
        if not any([self.override_non_empty, self.override_with_empty]):
374
            self.log(_("Don't override analysis results"))
375
        if self.override_non_empty:
376
            self.log(_("Override non-empty analysis results"))
377
        if self.override_with_empty:
378
            self.log(_("Override non-empty analysis results, also with empty"))
379
380
        # Attachments will be created in any worksheet that contains
381
        # analyses that are updated by this import
382
        attachments = {}
383
        infile = self.parser.getInputFile()
384
385
        analysis_attach_importfile = get_registry_record("import_analysis_attach_importfile")
386
387
        ancount = 0
388
        updated_analyses = []
389
        importedinsts = {}
390
        importedars = {}
391
392
        for sid, results in parsed_results.items():
393
            refsample = None
394
395
            # fetch all analyses for the given sample ID
396
            analyses = self.get_analyses_for(sid)
397
398
            # No registered analyses found, but maybe we need to
399
            # create them first if we have an instrument
400
            if len(analyses) == 0 and not self.instrument:
401
                self.warn(_("Instrument not found"))
402
                self.warn(_("No Sample with '{allowed_ar_states}' states"
403
                            "found, and no QC analyses found for {sid}"
404
                            .format(allowed_ar_states=", ".join(
405
                                self.allowed_sample_states_msg),
406
                                    sid=sid)))
407
                continue
408
409
            # we have an instrument
410
            elif len(analyses) == 0 and self.instrument:
411
                # Create a new ReferenceAnalysis and link it to the Instrument.
412
                refsample = self.get_reference_sample_by_id(sid)
413
                if not refsample:
414
                    self.warn(_("No Sample found for {sid}"
415
                                .format(sid=sid)))
416
                    continue
417
418
                # Allowed are more than one result for the same sample and
419
                # analysis. Needed for calibration tests.
420
                service_uids = []
421
                for result in results:
422
                    # For each keyword, create a ReferenceAnalysis and attach
423
                    # it to the ReferenceSample
424
                    service_uids.extend([
425
                        api.get_uid(service) for service in self.services
426
                        if service.getKeyword() in result.keys()])
427
428
                analyses = self.instrument.addReferences(
429
                    refsample, list(set(service_uids)))
430
431
            # No analyses found
432
            elif len(analyses) == 0:
433
                self.warn(_("No analyses found for {sid} "
434
                            "in the states '{allowed_sample_states}' "
435
                            .format(allowed_sample_states=", ".join(
436
                                self.allowed_sample_states_msg),
437
                                    sid=sid)))
438
                continue
439
440
            # import the results
441
            for result in results:
442
443
                for keyword, values in result.items():
444
445
                    # keyword might be excluded
446
                    if keyword not in self.keywords:
447
                        continue
448
449
                    ans = [a for a in analyses if a.getKeyword() == keyword
450
                           and api.get_workflow_status_of(a)
451
                           in self.allowed_analysis_states]
452
453
                    analysis = None
454
455
                    if len(ans) == 0:
456
                        # no analysis found for keyword
457
                        self.warn(_("No analyses found for {sid} "
458
                                    "and keyword '{keyword}'"
459
                                    .format(sid=sid, keyword=keyword)))
460
                        continue
461
                    elif len(ans) > 1:
462
                        # multiple analyses found for keyword
463
                        self.warn(_("More than one analysis found for "
464
                                    "{sid} and keyword '{keyword}'"
465
                                    .format(sid=sid, keyword=keyword)))
466
                        continue
467
                    else:
468
                        analysis = ans[0]
469
470
                    # Create attachment in worksheet linked to this analysis.
471
                    # Only if this import has not already created the
472
                    # attachment, and only if the filename of the attachment is
473
                    # unique in this worksheet.
474
                    # Otherwise we will attempt to use existing attachment.
475
                    ws = analysis.getWorksheet()
476
                    if ws:
477
                        wsid = ws.getId()
478
                        if wsid not in attachments:
479
                            fn = infile.filename
480
                            fn_attachments = self.get_attachment_filenames(ws)
481
                            if fn in fn_attachments.keys():
482
                                attachments[wsid] = fn_attachments[fn]
483
                            else:
484
                                attachments[wsid] = self.create_attachment(
485
                                    ws, infile)
486
487
                    # Process the analysis
488
                    processed = self.process_analysis(sid, analysis, values)
489
490
                    if processed:
491
                        updated_analyses.append(analysis)
492
                        ancount += 1
493
494
                        if refsample and self.instrument:
495
                            inst = self.instrument
496
                            # Calibration Test (import to Instrument)
497
                            importedinst = inst.title in importedinsts.keys() \
498
                                and importedinsts[inst.title] or []
499
                            if keyword not in importedinst:
500
                                importedinst.append(keyword)
501
                            importedinsts[inst.title] = importedinst
502
                        else:
503
                            ar = analysis.portal_type == "Analysis" \
504
                                and analysis.aq_parent or None
505
                            if ar is not None:
506
                                importedar = ar.getId() in importedars.keys() \
507
                                            and importedars[ar.getId()] or []
508
                                if keyword not in importedar:
509
                                    importedar.append(keyword)
510
                                importedars[ar.getId()] = importedar
511
512
                        if ws and analysis_attach_importfile:
513
                            # attach import file
514
                            self.attach_attachment(
515
                                analysis, attachments[ws.getId()])
516
517
        # recalculate analyses with calculations after all results are set
518
        for analysis in updated_analyses:
519
            # only routine analyses can be used in calculations
520
            if IRoutineAnalysis.providedBy(analysis):
521
                sample_id = analysis.getRequestID()
522
                self.calculateTotalResults(sample_id, analysis)
523
524
        # reindex sample to update progress (and other indexes/metadata)
525
        samples = set(map(api.get_parent, updated_analyses))
526
        for sample in samples:
527
            sample.reindexObject()
528
529
        for arid, acodes in six.iteritems(importedars):
530
            acodesmsg = "Analysis %s" % ', '.join(acodes)
531
            self.log(_("{request_id}: {keywords} imported sucessfully"
532
                       .format(request_id=arid, keywords=acodesmsg)))
533
534
        for instid, acodes in six.iteritems(importedinsts):
535
            acodesmsg = "Analysis %s" % ', '.join(acodes)
536
            msg = "%s: %s %s" % (instid, acodesmsg, "imported sucessfully")
537
            self.log(msg)
538
539
        if refsample and self.instrument:
Issue (introduced by this line): The variable refsample does not seem to be defined in case the for loop on line 392 is not entered. Are you sure this can never be the case?
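A possible defensive tweak (hypothetical, not part of the file) would be to bind the name once before the loop, e.g. refsample = None ahead of line 392. In practice the loop body runs whenever execution reaches this check, because process() returns early when parse_results() yields nothing.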
540
            self.log(_("Import finished successfully: {updated_ars} Samples, "
541
                       "{updated_instruments} Instruments and "
542
                       "{updated_results} results updated"
543
                       .format(updated_ars=str(len(importedars)),
544
                               updated_instruments=str(len(importedinsts)),
545
                               updated_results=str(ancount))))
546
        else:
547
            self.log(_("Import finished successfully: {updated_ars} Samples "
548
                       "and {updated_results} results updated"
549
                       .format(updated_ars=str(len(importedars)),
550
                               updated_results=str(ancount))))
551
552
    @deprecate("Please use self.process_analysis instead")
553
    def _process_analysis(self, sid, analysis, values):
554
        return self.process_analysis(sid, analysis, values)
555
556
    def process_analysis(self, sid, analysis, values):
557
        """Process a single analysis result
558
559
        :param sid: Sample ID
560
        :param analysis: Analysis object
561
        :param values: Dictionary of values, including the result to set
562
        :returns: True if any value (interims, result or fields) was updated
563
        """
564
565
        # set the analysis interim fields
566
        interims_updated = self.set_analysis_interims(sid, analysis, values)
567
568
        # set the analysis result
569
        result_updated = self.set_analysis_result(sid, analysis, values)
570
571
        # set additional field values
572
        fields_updated = self.set_analysis_fields(sid, analysis, values)
573
574
        # Nothing updated
575
        if not any([result_updated, interims_updated, fields_updated]):
576
            return False
577
578
        # submit the result
579
        self.save_submit_analysis(analysis)
580
        analysis.reindexObject()
581
582
        return True
583
584
    def set_analysis_interims(self, sid, analysis, values):
585
        """Set the analysis interim fields
586
587
        :param sid: Sample ID
588
        :param analysis: Analysis object
589
        :param values: Dictionary of values, including the result to set
590
        :returns: True if the interims were written
591
        """
592
        updated = False
593
        keys = values.keys()
594
        interims = self.get_interim_fields(analysis)
595
        interims_out = []
596
597
        for interim in interims:
598
            value = EMPTY_MARKER
599
            keyword = interim.get("keyword")
600
            title = interim.get("title")
601
            interim_copy = interim.copy()
602
            # Check if we have an interim value set
603
            if keyword in keys:
604
                value = values.get(keyword)
605
            elif title in keys:
606
                value = values.get(title)
607
            if value is not EMPTY_MARKER:
608
                # set the value
609
                interim_copy["value"] = value
610
                updated = True
611
                # TODO: change test not to rely on this logline!
612
                self.log(_("{sid} result for '{analysis_keyword}:"
613
                           "{interim_keyword}': '{value}'"
614
                           .format(sid=sid,
615
                                   analysis_keyword=analysis.getKeyword(),
616
                                   interim_keyword=keyword,
617
                                   value=str(value))))
618
            interims_out.append(interim_copy)
619
620
        # write back interims
621
        if len(interims_out) > 0:
622
            analysis.setInterimFields(interims_out)
623
            analysis.calculateResult(override=self.override[0])
624
625
        return updated
626
627
    def set_analysis_result(self, sid, analysis, values):
628
        """Set the analysis result field
629
630
        Results can only be set for Analyses with no calculation assigned.
631
632
        If the Analysis already has a result, it is only overridden
633
        when the right override option is set.
634
635
        :param sid: Sample ID
636
        :param analysis: Analysis object
637
        :param values: Dictionary of values, including the result to set
638
        :returns: True if the result was written
639
        """
640
        keyword = analysis.getKeyword()
641
        result_key = values.get(DEFAULT_RESULT_KEY, "")
642
        result = values.get(result_key, "")
643
        calculation = analysis.getCalculation()
644
645
        # check if analysis has a calculation set
646
        if calculation:
647
            self.log(_(u"Skipping result for analysis '{keyword}' of sample "
648
                       "'{sid}' with calculation '{calculation}'"
649
                       .format(keyword=keyword,
650
                               sid=sid,
651
                               calculation=api.safe_unicode(
652
                                   calculation.Title()))))
653
            return False
654
655
        # check if non-empty result can be overwritten
656
        if not self.can_override_analysis_result(analysis, result):
657
            self.log(_("Analysis '{keyword}' of sample '{sid}' has the "
658
                       "result '{result}' set, which is kept due to the "
659
                       "selected override option"
660
                       .format(sid=sid,
661
                               result=analysis.getResult(),
662
                               keyword=keyword)))
663
            return False
664
665
        # convert result for result options
666
        result = self.convert_analysis_result(analysis, result)
667
668
        # convert capture date if set
669
        date_captured = values.get("DateTime")
670
        if date_captured:
671
            date_captured = dtime.to_DT(date_captured)
672
673
        # set the analysis result
674
        analysis.setResult(result)
675
676
        # set the result capture date
677
        if date_captured:
678
            analysis.setResultCaptureDate(date_captured)
679
680
        self.log(_("{sid} result for '{keyword}': '{result}'"
681
                   .format(sid=sid, keyword=keyword, result=result)))
682
683
        return True
684
685
    def set_analysis_fields(self, sid, analysis, values):
686
        """Set additional analysis fields
687
688
        This allows setting additional analysis fields like
689
        Remarks, Uncertainty, LDL/UDL etc.
690
691
        :param sid: Sample ID
692
        :param analysis: Analysis object
693
        :param values: Dictionary of values, including the result to set
694
        :returns: True if any field was updated
695
        """
696
        updated = False
697
698
        fields = api.get_fields(analysis)
699
        interim_fields = self.get_interim_fields(analysis)
700
701
        for key, value in values.items():
702
            if key not in fields:
703
                # skip non-existing fields
704
                continue
705
            elif key == "Result":
706
                # skip the result field
707
                continue
708
            elif key in interim_fields:
709
                # skip the interim fields
710
                continue
711
712
            field = fields.get(key)
713
            field_value = field.get(analysis)
714
715
            if field_value and not self.override_non_empty:
716
                # skip fields with existing values
717
                continue
718
719
            # set the new field value, preferably with the setter
720
            setter = "set{}".format(field.getName().capitalize())
721
            mutator = getattr(analysis, setter, None)
722
            if mutator:
723
                # we have a setter
724
                mutator(value)
725
            else:
726
                # set with the field's set method
727
                field.set(analysis, value)
728
729
            updated = True
730
            self.log(_("{sid} Updated field '{field}' with '{value}'"
731
                       .format(sid=sid, field=key, value=value)))
732
        return updated
733
734
    def save_submit_analysis(self, analysis):
735
        """Submit analysis and ignore errors
736
        """
737
        try:
738
            api.do_transition_for(analysis, "submit")
739
        except api.APIError:
740
            pass
741
742
    def get_interim_fields(self, analysis):
743
        """Return the interim fields of the analysis
744
        """
745
        interim_fields = getattr(analysis, "getInterimFields", None)
746
        if not callable(interim_fields):
747
            return []
748
        return interim_fields()
749
750
    def calculateTotalResults(self, objid, analysis):
751
        """ If an AR(objid) has an analysis that has a calculation
752
        then check if the given analysis is used in the calculation's formula.
753
        Here we are dealing with two types of analysis.
754
        1. Calculated Analysis - Results are calculated.
755
        2. Analysis - Results are captured and not calculated
756
        :param objid: AR ID or Worksheet's Reference Sample IDs
757
        :param analysis: Analysis Object
758
        """
759
        for obj in self.get_analyses_for(objid):
760
            # skip analyses w/o calculations
761
            if not obj.getCalculation():
762
                continue
763
            # get the calculation
764
            calculation = obj.getCalculation()
765
            # get the dependent services of the calculation
766
            dependencies = calculation.getDependentServices()
767
            # get the analysis service of the passed in analysis
768
            service = analysis.getAnalysisService()
769
            # skip when service is not a dependency of the calculation
770
            if service not in dependencies:
771
                continue
772
            # recalculate analysis result
773
            success = obj.calculateResult(override=self.override[0])
774
            if success:
775
                self.save_submit_analysis(obj)
776
                obj.reindexObject(idxs=["Result"])
777
                self.log(_("{request_id}: calculated result for "
778
                           "'{analysis_keyword}': '{analysis_result}'"
779
                           .format(request_id=objid,
780
                                   analysis_keyword=obj.getKeyword(),
781
                                   analysis_result=str(obj.getResult()))))
782
                # recursively recalculate analyses that have this analysis as
783
                # a dependent service
784
                self.calculateTotalResults(objid, obj)
785
786
    def create_attachment(self, ws, infile):
787
        """Create a new attachment in the attachment
788
789
        :param ws: Worksheet
790
        :param infile: upload file wrapper
791
        :returns: Attachment object
792
        """
793
        if not infile:
794
            return None
795
796
        att_type = self.create_mime_attachmenttype()
797
        filename = infile.filename
798
799
        attachment = api.create(ws, "Attachment")
800
        attachment.edit(
801
            title=filename,
802
            AttachmentFile=infile,
803
            AttachmentType=api.get_uid(att_type),
804
            AttachmentKeys="Results, Automatic import",
805
            RenderInReport=False,
806
        )
807
        attachment.reindexObject()
808
809
        logger.info(_(u"Attached file '{filename}' to worksheet {worksheet}"
810
                      .format(filename=api.safe_unicode(filename),
811
                              worksheet=ws.getId())))
812
813
        return attachment
814
815
    def create_mime_attachmenttype(self):
816
        """Create (or get) an attachment filetype
817
        """
818
        file_type = self.parser.getAttachmentFileType()
819
        obj = self.get_attachment_type_by_title(file_type)
820
        if not obj:
821
            obj = api.create(self.attachment_types, "AttachmentType")
822
            obj.edit(title=file_type,
823
                     description="Auto generated")
824
        return obj
825
826
    def attach_attachment(self, analysis, attachment):
827
        """Attach a file or a given set of files to an analysis
828
829
        :param analysis: analysis where the files are to be attached
830
        :param attachment: files to be attached. This can be either a
831
        single file or a list of files
832
        :return: None
833
        """
834
        if not attachment:
835
            return
836
        if isinstance(attachment, list):
837
            for attach in attachment:
838
                self.attach_attachment(analysis, attach)
839
            return
840
        # current attachments
841
        an_atts = analysis.getAttachment()
842
        atts_filenames = [att.getAttachmentFile().filename for att in an_atts]
843
        filename = attachment.getAttachmentFile().filename
844
845
        if filename not in atts_filenames:
846
            an_atts.append(attachment)
847
            logger.info(
848
                _(u"Attaching '{attachment}' to Analysis '{analysis}'"
849
                  .format(attachment=api.safe_unicode(filename),
850
                          analysis=analysis.getKeyword())))
851
            analysis.setAttachment([att.UID() for att in an_atts])
852
            analysis.reindexObject()
853
        else:
854
            self.log(_(u"Attachment '{attachment}' was already linked "
855
                       "to analysis {analysis}"
856
                       .format(attachment=api.safe_unicode(filename),
857
                               analysis=analysis.getKeyword())))
858
859
    def get_attachment_filenames(self, ws):
860
        """Returns all attachment filenames in the given worksheet
861
        """
862
        fn_attachments = {}
863
        for att in ws.objectValues("Attachment"):
864
            fn = att.getAttachmentFile().filename
865
            if fn not in fn_attachments:
866
                fn_attachments[fn] = []
867
            fn_attachments[fn].append(att)
868
        return fn_attachments
869
870
    def is_analysis_allowed(self, analysis):
871
        """Filter analyses that match the import criteria
872
        """
873
        if IReferenceAnalysis.providedBy(analysis):
874
            return True
875
        # Routine Analyses must be in the allowed WF states
876
        status = api.get_workflow_status_of(analysis)
877
        if status in self.allowed_analysis_states:
878
            return True
879
        return False
880
881
    def get_analyses_for(self, sid):
882
        """Get analyses for the given sample ID
883
884
        Only analyses that are in the allowed analysis states are returned.
885
        If not a ReferenceAnalysis, allowed sample states are also checked.
886
887
        :param sid: sample ID or Worksheet Reference Sample ID
888
        :returns: list of analyses / empty list if no analyses were found
889
        """
890
        analyses = []
891
892
        # Acceleration of searches using prioritization
893
        if self.priorizedsearchcriteria in ["rgid", "rid", "ruid"]:
894
            # Look from reference analyses
895
            analyses = self._getZODBAnalysesFromReferenceAnalyses(
896
                    sid, self.priorizedsearchcriteria)
897
898
        if len(analyses) == 0:
899
            # Look from ar and derived
900
            analyses = self._getZODBAnalysesFromAR(
901
                sid, "", self.searchcriteria, self.allowed_sample_states)
902
903
        return list(filter(self.is_analysis_allowed, analyses))
904
905
    @deprecate("Please use self.find_objects instead")
906
    def _getObjects(self, oid, criteria, states):
907
        return self.find_objects(oid, criteria, states)
908
909
    def find_objects(self, oid, criteria, states):
910
        """Find objects
911
912
        :param oid: Primary search ID
913
        """
914
        results = []
915
916
        if criteria in ["arid"]:
917
            query = {"getId": oid, "review_state": states}
918
            results = self.sample_catalog(query)
919
        elif criteria == "csid":
920
            query = {"getClientSampleID": oid, "review_state": states}
921
            results = self.sample_catalog(query)
922
        elif criteria == "aruid":
923
            query = {"UID": oid, "review_state": states}
924
            results = self.sample_catalog(query)
925
        elif criteria == "rgid":
926
            query = {
927
                "portal_type": ["ReferenceAnalysis", "DuplicateAnalysis"],
928
                "getReferenceAnalysesGroupID": oid,
929
            }
930
            results = self.analysis_catalog(query)
931
        elif criteria == "rid":
932
            query = {
933
                "portal_type": ["ReferenceAnalysis", "DuplicateAnalysis"],
934
                "getId": oid,
935
            }
936
            results = self.analysis_catalog(query)
937
        elif criteria == "ruid":
938
            query = {
939
                "portal_type": ["ReferenceAnalysis", "DuplicateAnalysis"],
940
                "UID": oid,
941
            }
942
            results = self.analysis_catalog(query)
943
944
        if len(results) > 0:
945
            self.priorizedsearchcriteria = criteria
946
947
        return results
948
949
    @deprecate("Please use self.get_analyses_for instead")
950
    def _getZODBAnalyses(self, sid):
951
        return self.get_analyses_for(sid)
952
953
    def _getZODBAnalysesFromAR(self, objid, criteria,
954
                               allowedsearches, arstates):
955
        ars = []
956
        analyses = []
957
        if criteria:
958
            ars = self.find_objects(objid, criteria, arstates)
959
            if not ars or len(ars) == 0:
960
                return self._getZODBAnalysesFromAR(objid, None,
961
                                                   allowedsearches, arstates)
962
        else:
963
            sortorder = ["arid", "csid", "aruid"]
964
            for crit in sortorder:
965
                if (crit == "arid" and "getId" in allowedsearches) \
966
                    or (crit == "csid" and "getClientSampleID"
967
                                in allowedsearches) \
968
                        or (crit == "aruid" and "getId" in allowedsearches):
969
                    ars = self.find_objects(objid, crit, arstates)
970
                    if ars and len(ars) > 0:
971
                        break
972
973
        if not ars or len(ars) == 0:
974
            return self._getZODBAnalysesFromReferenceAnalyses(objid, None)
975
976
        elif len(ars) > 1:
977
            self.err("More than one Sample found for {object_id}"
978
                     .format(object_id=objid))
979
            return []
980
981
        ar = ars[0].getObject()
982
        analyses = [analysis.getObject() for analysis in ar.getAnalyses()]
983
984
        return analyses
985
986
    def _getZODBAnalysesFromReferenceAnalyses(self, objid, criteria):
987
        analyses = []
988
        if criteria:
989
            refans = self.find_objects(objid, criteria, [])
990
            if len(refans) == 0:
991
                return []
992
993
            elif criteria == "rgid":
994
                return [an.getObject() for an in refans]
995
996
            elif len(refans) == 1:
997
                # The search has been made using the internal identifier
998
                # from a Reference Analysis (id or uid). That is not usual.
999
                an = refans[0].getObject()
1000
                worksheet = an.getWorksheet()
1001
                if worksheet:
1002
                    # A regular QC test (assigned to a Worksheet)
1003
                    return [an, ]
1004
                elif an.getInstrument():
1005
                    # An Internal Calibration Test
1006
                    return [an, ]
1007
                else:
1008
                    # Oops. This should never happen!
1009
                    # A ReferenceAnalysis must be always assigned to
1010
                    # a Worksheet (Regular QC) or to an Instrument
1011
                    # (Internal Calibration Test)
1012
                    self.err("The Reference Analysis {object_id} has neither "
1013
                             "instrument nor worksheet assigned"
1014
                             .format(object_id=objid))
1015
                    return []
1016
            else:
1017
                # This should never happen!
1018
                # Fetching ReferenceAnalysis for its id or uid should
1019
                # *always* return a unique result
1020
                self.err(
1021
                    "More than one Reference Analysis found for {object_id}"
1022
                    .format(object_id=objid))
1023
                return []
1024
1025
        else:
1026
            sortorder = ["rgid", "rid", "ruid"]
1027
            for crit in sortorder:
1028
                analyses = self._getZODBAnalysesFromReferenceAnalyses(objid,
1029
                                                                      crit)
1030
                if len(analyses) > 0:
1031
                    return analyses
1032
1033
        return analyses
1034