main.main()   F
last analyzed

Complexity

Conditions 21

Size

Total Lines 90
Code Lines 59

Duplication

Lines 0
Ratio 0 %

Importance

Changes 0
Metric Value
eloc 59
dl 0
loc 90
rs 0
c 0
b 0
f 0
cc 21
nop 0

How to fix

Long Method

Small methods make your code easier to understand, especially when combined with a good name. Conversely, when a method is small, finding a good name is usually much easier.

For example, if you find yourself adding comments to a method's body, that is usually a good sign that you should extract the commented part into a new method, using the comment as a starting point for naming it.
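The advice above can be sketched with a small before/after pair. This is a hypothetical example, not taken from the analyzed source; the function and variable names are illustrative only.

```python
# Before: a comment marks a block inside a longer function.
def report_before(orders):
    # compute the total price including tax
    total = 0.0
    for order in orders:
        total += order["price"] * 1.19
    print("Total: %.2f" % total)


# After: the comment becomes the name of an extracted helper.
def total_price_including_tax(orders, tax_rate=1.19):
    """Sum order prices, applying the given tax rate."""
    return sum(order["price"] * tax_rate for order in orders)


def report_after(orders):
    print("Total: %.2f" % total_price_including_tax(orders))
```

The comment disappears because the helper's name now carries the same information.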

Commonly applied refactorings include: Extract Method.

Complexity

Complex code like main.main() often does many different things. To break it down, we need to identify a cohesive component within it. A common approach to finding such a component is to look for fields or methods that share the same prefixes or suffixes.

Once you have determined the fields that belong together, you can apply the Extract Class refactoring. If the component makes sense as a subclass, Extract Subclass is also a candidate, and is often faster.
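A minimal sketch of Extract Class, assuming a hypothetical case where fields sharing the "server_" prefix indicate a cohesive component; none of these class names come from the analyzed source.

```python
# Before: one class mixes connection details with acquisition settings.
class AcquisitionBefore:
    def __init__(self, server_host, server_port, interval):
        self.server_host = server_host
        self.server_port = server_port
        self.interval = interval


# After: the shared-prefix fields are extracted into their own class.
class ServerEndpoint:
    def __init__(self, host, port):
        self.host = host
        self.port = port

    def address(self):
        # behavior that belongs to the extracted component moves with it
        return "%s:%d" % (self.host, self.port)


class AcquisitionAfter:
    def __init__(self, endpoint, interval):
        self.endpoint = endpoint
        self.interval = interval
```

After the extraction, logic that only touches host and port (formatting, validation, connecting) has a natural home on ServerEndpoint.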

import json
import mysql.connector
import config
from multiprocessing import Process
import time
import logging
from logging.handlers import RotatingFileHandler
import acquisition


def main():
    """main"""
    # create logger
    logger = logging.getLogger('myems-modbus-tcp')
    # specifies the lowest-severity log message a logger will handle,
    # where debug is the lowest built-in severity level and critical is the highest built-in severity.
    # For example, if the severity level is INFO, the logger will handle only INFO, WARNING, ERROR, and CRITICAL
    # messages and will ignore DEBUG messages.
    logger.setLevel(logging.ERROR)
    # create file handler which logs messages
    fh = RotatingFileHandler('myems-modbus-tcp.log', maxBytes=1024*1024, backupCount=1)
    # create formatter and add it to the handlers
    formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
    fh.setFormatter(formatter)
    # add the handlers to logger
    logger.addHandler(fh)

    # Get Data Sources
    while True:
        # TODO: This service has to RESTART to reload latest data sources and this should be fixed
        cnx_system_db = None
        cursor_system_db = None
        try:
            cnx_system_db = mysql.connector.connect(**config.myems_system_db)
            cursor_system_db = cnx_system_db.cursor()
        except Exception as e:
            logger.error("Error in main process " + str(e))
            if cursor_system_db:
                cursor_system_db.close()
            if cnx_system_db:
                cnx_system_db.close()
            # sleep several minutes and continue the outer loop to reload points
            time.sleep(60)
            continue

        # Get data sources by gateway and protocol
        try:
            query = (" SELECT ds.id, ds.name, ds.connection "
                     " FROM tbl_data_sources ds, tbl_gateways g "
                     " WHERE ds.protocol = 'modbus-tcp' AND ds.gateway_id = g.id AND g.id = %s AND g.token = %s "
                     " ORDER BY ds.id ")
            cursor_system_db.execute(query, (config.gateway['id'], config.gateway['token'],))
            rows_data_source = cursor_system_db.fetchall()
        except Exception as e:
            logger.error("Error in main process " + str(e))
            # sleep several minutes and continue the outer loop to reload points
            time.sleep(60)
            continue
        finally:
            if cursor_system_db:
                cursor_system_db.close()
            if cnx_system_db:
                cnx_system_db.close()

        if rows_data_source is None or len(rows_data_source) == 0:
            logger.error("Data Source Not Found, Wait for minutes to retry.")
            # wait for a while and retry
            time.sleep(60)
            continue
        else:
            # Stop to connect these data sources
            break

    for row_data_source in rows_data_source:
        print("Data Source: ID=%s, Name=%s, Connection=%s " %
              (row_data_source[0], row_data_source[1], row_data_source[2]))

        if row_data_source[2] is None or len(row_data_source[2]) == 0:
            logger.error("Data Source Connection Not Found.")
            continue

        try:
            server = json.loads(row_data_source[2])
        except Exception as e:
            logger.error("Data Source Connection JSON error " + str(e))
            continue

        if 'host' not in server.keys() \
                or 'port' not in server.keys() \
                or server['host'] is None \
                or server['port'] is None \
                or len(server['host']) == 0 \
                or not isinstance(server['port'], int) \
                or server['port'] < 1:
            logger.error("Data Source Connection Invalid.")
            continue

        # fork worker process for each data source
        # todo: how to restart the process if the process terminated unexpectedly
        Process(target=acquisition.process, args=(logger, row_data_source[0], server['host'], server['port'])).start()


if __name__ == "__main__":
    main()
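Applying the Long Method advice to this function, one candidate for Extract Method is the connection-validation block inside the for loop. A minimal sketch, assuming the same JSON connection format as the original; the helper name parse_server and its return convention are suggestions, not part of the analyzed code.

```python
import json


def parse_server(connection_str):
    """Validate a data source connection string and return (host, port),
    or None if the connection is missing or invalid."""
    if connection_str is None or len(connection_str) == 0:
        return None
    try:
        server = json.loads(connection_str)
    except Exception:
        return None
    host = server.get('host')
    port = server.get('port')
    # mirror the original checks: non-empty host, positive integer port
    if host is None or len(host) == 0:
        return None
    if not isinstance(port, int) or port < 1:
        return None
    return host, port
```

With such a helper, the loop body in main() shrinks to a call like `server = parse_server(row_data_source[2])` followed by a single None check, which also reduces the conditions count that drives the complexity metric above.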