| Metric | Value |
| --- | --- |
| Conditions | 10 |
| Total Lines | 59 |
| Code Lines | 44 |
| Covered Lines | 0 |
| Coverage Ratio | 0 % |
| Tests | 0 |
| CRAP Score | 110 |
| Changes | 0 |
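As a rough cross-check of the score above, assuming the standard CRAP formula and that the Conditions value stands in for cyclomatic complexity (an assumption about how this report derives its number), 0 % coverage yields exactly 110:

```python
# Sketch of the usual CRAP computation: complexity^2 * (1 - coverage)^3 + complexity.
complexity = 10   # "Conditions" above, taken as the complexity input
coverage = 0.0    # 0 % coverage ratio
crap = complexity ** 2 * (1 - coverage) ** 3 + complexity
print(crap)       # 110.0, matching the CRAP Score above
```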
Small methods make your code easier to understand, particularly when combined with a good name. Conversely, if a method is small, finding a good name for it is usually much easier.
For example, if you find yourself adding comments inside a method's body, that is usually a sign that the commented part should be extracted into a new method, with the comment serving as a starting point for the new method's name.
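As a minimal illustration (the names below are hypothetical, not taken from build_all_guides.py), the comment becomes the method name and the caller reads as a summary:

```python
# Before: a comment explains what the next few lines do.
def report(orders):
    # compute the total price including tax
    total = 0.0
    for order in orders:
        total += order.price * (1 + order.tax_rate)
    print("Total: %.2f" % total)

# After: the commented block is extracted and named after the comment.
def total_price_including_tax(orders):
    return sum(order.price * (1 + order.tax_rate) for order in orders)

def report(orders):
    print("Total: %.2f" % total_price_including_tax(orders))
```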
Commonly applied refactorings include:

- Extract Method

If many parameters or temporary variables are present:

- Replace Method with Method Object (sketched below)
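A rough sketch of Replace Method with Method Object, with purely hypothetical names: the long method's locals become fields of a dedicated object, so the steps can be split into small, named methods that share state.

```python
class GuideExporter:
    """Method object: former local variables become fields."""

    def __init__(self, benchmarks, output_dir):
        self.benchmarks = benchmarks
        self.output_dir = output_dir
        self.guides = []

    def export(self):
        # The original long method now reads as a sequence of named steps.
        self._collect_guides()
        self._write_guides()
        return self.guides

    def _collect_guides(self):
        # Formerly the first half of the long method.
        self.guides = ["%s-guide.html" % name for name in self.benchmarks]

    def _write_guides(self):
        # Formerly the second half of the long method.
        for guide in self.guides:
            print("would write", self.output_dir, guide)
```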
Complex units like build_all_guides.main() often do a lot of different things. To break such a unit down, we need to identify a cohesive component within it. A common way to find such a component is to look for fields, methods, or local variables that share the same prefixes or suffixes.
Once you have determined which of them belong together, you can apply the Extract Class refactoring. If the component makes sense as a subclass, Extract Subclass is also a candidate, and is often faster.
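As a rough sketch of Extract Class (class names are hypothetical; the `index_*` prefix is taken from the variables visible in main() below): members that share a prefix move into their own class, and the original code holds one instance of it.

```python
# Before: the index_* prefix hints at a hidden component.
class GuideBuilder:
    def __init__(self):
        self.index_links = []
        self.index_options = []
        self.index_initial_src = None

# After: the shared prefix becomes a class of its own.
class IndexData:
    def __init__(self):
        self.links = []
        self.options = []
        self.initial_src = None

class GuideBuilder:
    def __init__(self):
        self.index = IndexData()
```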
```python
#!/usr/bin/python3

# The imports and parse_args() are elided in this report; the modules below
# are inferred from the calls made inside main().
import os
import sys
import threading

import ssg.build_guides
import ssg.xccdf
import ssg.xml


def main():
    args = parse_args()

    input_path, input_basename, path_base, output_dir = \
        ssg.build_guides.get_path_args(args)
    index_path = os.path.join(output_dir, "%s-guide-index.html" % (path_base))

    if args.cmd == "list_inputs":
        print(input_path)
        sys.exit(0)

    input_tree = ssg.xml.ElementTree.parse(input_path)
    benchmarks = ssg.xccdf.get_benchmark_id_title_map(input_tree)
    if len(benchmarks) == 0:
        raise RuntimeError(
            "Expected input file '%s' to contain at least 1 xccdf:Benchmark. "
            "No Benchmarks were found!" %
            (input_path)
        )

    benchmark_profile_pairs = ssg.build_guides.get_benchmark_profile_pairs(
        input_tree, benchmarks)

    if args.cmd == "list_outputs":
        guide_paths = ssg.build_guides.get_output_guide_paths(
            benchmarks, benchmark_profile_pairs, path_base, output_dir)

        for guide_path in guide_paths:
            print(guide_path)
        print(index_path)
        sys.exit(0)

    index_links, index_options, index_initial_src, queue = \
        ssg.build_guides.fill_queue(benchmarks, benchmark_profile_pairs,
                                    input_path, path_base, output_dir)

    # Spawn daemon worker threads that consume guide-generation tasks
    # from the shared queue.
    workers = []
    for worker_id in range(args.jobs):
        worker = threading.Thread(
            name="Guide generate worker #%i" % (worker_id),
            target=lambda queue=queue: ssg.build_guides.builder(queue)
        )
        workers.append(worker)
        worker.daemon = True
        worker.start()

    for worker in workers:
        worker.join()

    if queue.unfinished_tasks > 0:
        raise RuntimeError("Some of the guides were not exported successfully")

    index_source = ssg.build_guides.build_index(benchmarks, input_basename,
                                                index_links, index_options,
                                                index_initial_src)

    with open(index_path, "wb") as f:
        f.write(index_source.encode("utf-8"))
```
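To make the advice concrete, one possible first step is an Extract Method on the `list_outputs` branch of main(); the helper name and its placement are assumptions for illustration, not part of the project:

```python
# Hypothetical extraction of the "list_outputs" handling from main().
def print_output_paths(benchmarks, benchmark_profile_pairs, path_base,
                       output_dir, index_path):
    guide_paths = ssg.build_guides.get_output_guide_paths(
        benchmarks, benchmark_profile_pairs, path_base, output_dir)
    for guide_path in guide_paths:
        print(guide_path)
    print(index_path)

# In main(), the branch would then shrink to:
#     if args.cmd == "list_outputs":
#         print_output_paths(benchmarks, benchmark_profile_pairs,
#                            path_base, output_dir, index_path)
#         sys.exit(0)
```

Each such extraction shortens main(), lowers its condition count, and gives the index/guide steps names of their own.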