| Metric | Value |
| --- | --- |
| Conditions | 18 |
| Total Lines | 108 |
| Lines | 0 |
| Ratio | 0 % |
Small methods make your code easier to understand, especially when combined with a good name. Besides, if your method is small, finding a good name is usually much easier.
For example, if you find yourself adding comments to a method's body, that is usually a good sign that the commented part should be extracted into a new method, with the comment as a starting point for its name.
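As a minimal before/after sketch of that idea (the class and method names below are illustrative, not taken from zipline):

```python
# Before: a comment explains what the next few lines do.
class InvoiceBefore:
    def __init__(self, amount, customer_credit):
        self.amount = amount
        self.customer_credit = customer_credit

    def balance_due(self):
        # apply the customer's outstanding credit to the amount
        credit = min(self.customer_credit, self.amount)
        return self.amount - credit


# After: the commented block becomes a method named after the comment.
class InvoiceAfter(InvoiceBefore):
    def balance_due(self):
        return self.amount - self._applicable_customer_credit()

    def _applicable_customer_credit(self):
        return min(self.customer_credit, self.amount)
```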
Commonly applied refactorings include Extract Method; if many parameters or temporary variables are present, refactorings such as Replace Method with Method Object or Introduce Parameter Object can help.
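A rough sketch of Replace Method with Method Object, assuming a hypothetical long function whose parameters and temporaries become fields (all names are illustrative):

```python
class PriceCalculation:
    """Method object: the long function's parameters and temporaries
    become fields, so each step can be split into its own method."""

    def __init__(self, quantity, unit_price, shipping, discount_rate):
        self.quantity = quantity
        self.unit_price = unit_price
        self.shipping = shipping
        self.discount_rate = discount_rate

    def compute(self):
        return self._discounted_goods() + self.shipping

    def _discounted_goods(self):
        gross = self.quantity * self.unit_price
        return gross * (1 - self.discount_rate)


def total_price(quantity, unit_price, shipping, discount_rate):
    # the original long function shrinks to a one-line delegation
    return PriceCalculation(quantity, unit_price,
                            shipping, discount_rate).compute()
```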
Complex methods like zipline.gens.AlgorithmSimulator.transform() often do a lot of different things. To break such a method (and its enclosing class) down, we need to identify a cohesive component within it. A common approach to finding such a component is to look for fields and methods that share the same prefixes or suffixes.
Once you have determined the fields that belong together, you can apply the Extract Class refactoring. If the component makes sense as a subclass, Extract Subclass is also a candidate, and is often faster.
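For example (a hypothetical sketch; the field names are modeled loosely on the benchmark handling visible in the listing below, not on zipline's actual classes), two fields sharing a `benchmark_` prefix can be pulled into their own class:

```python
class BenchmarkState:
    """Extracted component: owns everything that used to share the
    'benchmark_' prefix on the simulator."""

    def __init__(self, source):
        self.source = source
        self.returns = {}

    def record(self, date):
        self.returns[date] = self.source.get_value(date)


class Simulator:
    def __init__(self, benchmark_source):
        # before: self.benchmark_source and self.benchmark_returns
        # lived directly on the simulator
        self.benchmark = BenchmarkState(benchmark_source)

    def on_day_end(self, date):
        self.benchmark.record(date)
```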
The flagged method (lines 91–198 of the source file):

```python
def transform(self):
    """
    Main generator work loop.
    """
    algo = self.algo
    algo.data_portal = self.data_portal
    handle_data = algo.event_manager.handle_data
    current_data = self.current_data

    data_portal = self.data_portal

    # can't cache a pointer to algo.perf_tracker because we're not
    # guaranteed that the algo doesn't swap out perf trackers during
    # its lifetime.
    # likewise, we can't cache a pointer to the blotter.

    algo.perf_tracker.position_tracker.data_portal = data_portal

    def every_bar(dt_to_use):
        # called every tick (minute or day).

        self.simulation_dt = dt_to_use
        algo.on_dt_changed(dt_to_use)

        blotter = algo.blotter
        perf_tracker = algo.perf_tracker

        # handle any transactions and commissions coming out new orders
        # placed in the last bar
        new_transactions, new_commissions = \
            blotter.get_transactions(current_data)

        for transaction in new_transactions:
            perf_tracker.process_transaction(transaction)

            # since this order was modified, record it
            order = blotter.orders[transaction.order_id]
            perf_tracker.process_order(order)

        if new_commissions:
            for commission in new_commissions:
                perf_tracker.process_commission(commission)

        handle_data(algo, current_data, dt_to_use)

        # grab any new orders from the blotter, then clear the list.
        # this includes cancelled orders.
        new_orders = blotter.new_orders
        blotter.new_orders = []

        # if we have any new orders, record them so that we know
        # in what perf period they were placed.
        if new_orders:
            for new_order in new_orders:
                perf_tracker.process_order(new_order)

        self.algo.portfolio_needs_update = True
        self.algo.account_needs_update = True
        self.algo.performance_needs_update = True

    def once_a_day(midnight_dt):
        # set all the timestamps
        self.simulation_dt = midnight_dt
        algo.on_dt_changed(midnight_dt)

        # call before trading start
        algo.before_trading_start(current_data)

        perf_tracker = algo.perf_tracker

        # handle any splits that impact any positions or any open orders.
        sids_we_care_about = \
            list(set(list(perf_tracker.position_tracker.positions.keys()) +
                     list(algo.blotter.open_orders.keys())))

        if len(sids_we_care_about) > 0:
            splits = data_portal.get_splits(sids_we_care_about,
                                            midnight_dt)
            if len(splits) > 0:
                algo.blotter.process_splits(splits)
                perf_tracker.position_tracker.handle_splits(splits)

    def handle_benchmark(date):
        algo.perf_tracker.all_benchmark_returns[date] = \
            self.benchmark_source.get_value(date)

    with self.processor, ZiplineAPI(self.algo):
        for dt, action in self.clock:
            if action == BAR:
                every_bar(dt)
            elif action == DAY_START:
                once_a_day(dt)
            elif action == DAY_END:
                # End of the day.
                handle_benchmark(normalize_date(dt))
                yield self._get_daily_message(dt, algo, algo.perf_tracker)
            elif action == MINUTE_END:
                handle_benchmark(dt)
                minute_msg, daily_msg = \
                    self._get_minute_message(dt, algo, algo.perf_tracker)

                yield minute_msg

                if daily_msg:
                    yield daily_msg

    risk_message = algo.perf_tracker.handle_simulation_end()
    yield risk_message
```
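Applied to the listing above, one possible direction (a sketch only; the `BarProcessor` name and its constructor are hypothetical, not part of zipline) is to notice that the nested functions every_bar, once_a_day and handle_benchmark all close over the same handful of locals (algo, current_data, data_portal, the benchmark source), which suggests extracting a collaborator whose fields are exactly those shared values:

```python
class BarProcessor:
    """Hypothetical home for transform()'s nested helpers: the values the
    closures shared become fields of the extracted component."""

    def __init__(self, algo, current_data, data_portal, benchmark_source):
        self.algo = algo
        self.current_data = current_data
        self.data_portal = data_portal
        self.benchmark_source = benchmark_source

    def every_bar(self, dt_to_use):
        self.algo.on_dt_changed(dt_to_use)
        # ... transaction, commission and order handling as in the listing ...

    def once_a_day(self, midnight_dt):
        self.algo.on_dt_changed(midnight_dt)
        self.algo.before_trading_start(self.current_data)
        # ... split handling as in the listing ...

    def handle_benchmark(self, date):
        self.algo.perf_tracker.all_benchmark_returns[date] = \
            self.benchmark_source.get_value(date)
```

transform() itself would then shrink to the clock loop, dispatching each (dt, action) pair to this object and yielding the daily and minute performance messages.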