Passed — Pull Request dev (#1052), created 01:42 by unknown

data.datasets.DSM_cts_ind — Rating: B

Complexity

Total Complexity: 52

Size/Duplication

Total Lines: 1471
Duplicated Lines: 9.31 %

Importance

Changes: 0

Metric   Value
wmc      52
eloc     734
dl       137
loc      1471
rs       7.306
c        0
b        0
f        0

1 Method

Rating   Name   Duplication   Size   Complexity  
A dsm_Potential.__init__() 0 6 1

18 Functions

Rating   Name   Duplication   Size   Complexity  
A ind_sites_vent_data_import() 35 35 2
A ind_sites_vent_data_import_individual() 36 36 2
A delete_dsm_entries() 0 63 1
A dsm_cts_ind_processing() 0 4 1
B data_export() 0 110 1
A ind_sites_data_import() 0 21 1
F calculate_potentials() 0 105 14
A col_per_unit() 0 4 1
A ind_osm_data_import_individual() 33 35 2
A cts_data_import() 0 39 2
A ind_osm_data_import() 32 32 2
B dsm_cts_ind() 0 297 1
A relate_to_schmidt_sites() 0 25 1
C create_dsm_components() 0 165 6
A calc_ind_site_timeseries() 0 71 2
B dsm_cts_ind_individual() 0 230 1
C aggregate_components() 0 75 9
A calc_per_unit() 0 5 2

How to fix

Duplicated Code

Duplicate code is one of the most pungent code smells. A commonly used rule of thumb is to restructure code once it is duplicated in three or more places.

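In this module the duplication is concrete: each of the `*_data_import` functions repeats the same loop that scales every `p_set` load list by a share. A minimal sketch of extracting that loop into one helper (the name `scale_p_set` is hypothetical, not part of the reviewed code):

```python
import pandas as pd


def scale_p_set(dsm: pd.DataFrame, share: float) -> pd.DataFrame:
    """Scale every load list in the 'p_set' column by a constant share.

    Hypothetical helper: this is the loop currently repeated in each
    *_data_import function, extracted once.
    """
    dsm = dsm.copy()
    dsm["p_set"] = dsm["p_set"].apply(
        lambda liste: [float(item) * share for item in liste]
    )
    return dsm
```

Each import function could then reduce to a query plus `scale_p_set(dsm, share)`, eliminating the four near-identical loop bodies flagged above.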

Complexity

Tip: Before tackling complexity, make sure that you eliminate any duplication first. This alone can often reduce the size of classes significantly.

Complex classes like data.datasets.DSM_cts_ind often do a lot of different things. To break such a class down, we need to identify a cohesive component within that class. A common approach to find such a component is to look for fields/methods that share the same prefixes or suffixes.

Once you have determined the fields that belong together, you can apply the Extract Class refactoring. If the component makes sense as a sub-class, Extract Subclass is also a candidate, and is often faster.
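Applied to this module, the shared `cts_`/`ind_` prefixes suggest grouping the import functions' common state behind one class. A sketch of the Extract Class idea (all names here are illustrative, not part of the reviewed module):

```python
# Hypothetical Extract Class sketch for the *_data_import functions.
class LoadCurveImporter:
    """Groups the state the import functions currently re-derive each time."""

    def __init__(self, source_key, share):
        self.source_key = source_key  # key into the datasets.yml sources
        self.share = share            # flexible share applied to each load

    def scale(self, p_set_lists):
        # the scaling step that each import function currently repeats
        return [[float(v) * self.share for v in ts] for ts in p_set_lists]
```

Each concrete importer (CTS, OSM industry, industrial sites) would then only supply its source key, SQL filter, and share, instead of duplicating the whole pipeline.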

import geopandas as gpd
import numpy as np
import pandas as pd

from egon.data import config, db
from egon.data.datasets import Dataset
from egon.data.datasets.electricity_demand.temporal import calc_load_curve
from egon.data.datasets.industry.temporal import identify_bus

# CONSTANTS
# TODO: move to datasets.yml
CON = db.engine()

# CTS
CTS_COOL_VENT_AC_SHARE = 0.22

S_FLEX_CTS = 0.5
S_UTIL_CTS = 0.67
S_INC_CTS = 1
S_DEC_CTS = 0
DELTA_T_CTS = 1

# industry
IND_VENT_COOL_SHARE = 0.039
IND_VENT_SHARE = 0.017

# OSM
S_FLEX_OSM = 0.5
S_UTIL_OSM = 0.73
S_INC_OSM = 0.9
S_DEC_OSM = 0.5
DELTA_T_OSM = 1

# paper
S_FLEX_PAPER = 0.15
S_UTIL_PAPER = 0.86
S_INC_PAPER = 0.95
S_DEC_PAPER = 0
DELTA_T_PAPER = 3

# recycled paper
S_FLEX_RECYCLED_PAPER = 0.7
S_UTIL_RECYCLED_PAPER = 0.85
S_INC_RECYCLED_PAPER = 0.95
S_DEC_RECYCLED_PAPER = 0
DELTA_T_RECYCLED_PAPER = 3

# pulp
S_FLEX_PULP = 0.7
S_UTIL_PULP = 0.83
S_INC_PULP = 0.95
S_DEC_PULP = 0
DELTA_T_PULP = 2

# cement
S_FLEX_CEMENT = 0.61
S_UTIL_CEMENT = 0.65
S_INC_CEMENT = 0.95
S_DEC_CEMENT = 0
DELTA_T_CEMENT = 4

# wz 23
WZ = 23

S_FLEX_WZ = 0.5
S_UTIL_WZ = 0.8
S_INC_WZ = 1
S_DEC_WZ = 0.5
DELTA_T_WZ = 1


class dsm_Potential(Dataset):
    def __init__(self, dependencies):
        super().__init__(
            name="DSM_potentials",
            version="0.0.4.dev",
            dependencies=dependencies,
            tasks=(dsm_cts_ind_processing),
        )

def cts_data_import(cts_cool_vent_ac_share):
    """
    Import CTS data necessary to identify DSM-potential.

    Parameters
    ----------
    cts_cool_vent_ac_share: float
        Share of cooling, ventilation and AC in CTS demand
    """

    # import load data
    sources = config.datasets()["DSM_CTS_industry"]["sources"][
        "cts_loadcurves"
    ]

    ts = db.select_dataframe(
        f"""SELECT bus_id, scn_name, p_set FROM
        {sources['schema']}.{sources['table']}"""
    )

    # identify relevant columns and prepare df to be returned
    dsm = pd.DataFrame(index=ts.index)

    dsm["bus"] = ts["bus_id"].copy()
    dsm["scn_name"] = ts["scn_name"].copy()
    dsm["p_set"] = ts["p_set"].copy()

    # calculate share of timeseries for air conditioning, cooling and
    # ventilation out of CTS-data
    timeseries = dsm["p_set"].copy()

    for index, liste in timeseries.items():
        share = [float(item) * cts_cool_vent_ac_share for item in liste]
        timeseries.loc[index] = share

    dsm["p_set"] = timeseries.copy()

    return dsm

def ind_osm_data_import(ind_vent_cool_share):
    """
    Import industry data per osm-area necessary to identify DSM-potential.

    Parameters
    ----------
    ind_vent_cool_share: float
        Share of considered application in industry demand
    """

    # import load data
    sources = config.datasets()["DSM_CTS_industry"]["sources"][
        "ind_osm_loadcurves"
    ]

    dsm = db.select_dataframe(
        f"""SELECT bus, scn_name, p_set FROM
        {sources['schema']}.{sources['table']}"""
    )

    # calculate share of timeseries for cooling and ventilation out of
    # industry-data
    timeseries = dsm["p_set"].copy()

    for index, liste in timeseries.items():
        share = [float(item) * ind_vent_cool_share for item in liste]
        timeseries.loc[index] = share

    dsm["p_set"] = timeseries.copy()

    return dsm

def ind_osm_data_import_individual(ind_vent_cool_share):
    """
    Import industry data per osm-area necessary to identify DSM-potential.

    Parameters
    ----------
    ind_vent_cool_share: float
        Share of considered application in industry demand
    """

    # import load data
    sources = config.datasets()["DSM_CTS_industry"]["sources"][
        "ind_osm_loadcurves_individual"
    ]

    dsm = db.select_dataframe(
        f"""
        SELECT osm_id, bus_id as bus, scn_name, p_set FROM
        {sources['schema']}.{sources['table']}
        WHERE scn_name in ('eGon2035', 'eGon100RE')
        """
    )

    # calculate share of timeseries for cooling and ventilation out of
    # industry-data
    timeseries = dsm["p_set"].copy()

    for index, liste in timeseries.items():
        share = [float(item) * ind_vent_cool_share for item in liste]
        timeseries.loc[index] = share

    dsm["p_set"] = timeseries.copy()

    return dsm

def ind_sites_vent_data_import(ind_vent_share, wz):
    """
    Import industry sites necessary to identify DSM-potential.

    Parameters
    ----------
    ind_vent_share: float
        Share of considered application in industry demand
    wz: int
        Wirtschaftszweig to be considered within industry sites
    """

    # import load data
    sources = config.datasets()["DSM_CTS_industry"]["sources"][
        "ind_sites_loadcurves"
    ]

    dsm = db.select_dataframe(
        f"""
        SELECT bus, scn_name, p_set FROM
        {sources['schema']}.{sources['table']}
        WHERE wz = '{wz}'
        """
    )

    # calculate share of timeseries for ventilation
    timeseries = dsm["p_set"].copy()

    for index, liste in timeseries.items():
        share = [float(item) * ind_vent_share for item in liste]
        timeseries.loc[index] = share

    dsm["p_set"] = timeseries.copy()

    return dsm

def ind_sites_vent_data_import_individual(ind_vent_share, wz):
    """
    Import industry sites necessary to identify DSM-potential.

    Parameters
    ----------
    ind_vent_share: float
        Share of considered application in industry demand
    wz: int
        Wirtschaftszweig to be considered within industry sites
    """

    # import load data
    sources = config.datasets()["DSM_CTS_industry"]["sources"][
        "ind_sites_loadcurves_individual"
    ]

    dsm = db.select_dataframe(
        f"""
        SELECT site_id, bus_id as bus, scn_name, p_set FROM
        {sources['schema']}.{sources['table']}
        WHERE scn_name IN ('eGon2035', 'eGon100RE')
        AND wz = '{wz}'
        """
    )

    # calculate share of timeseries for ventilation
    timeseries = dsm["p_set"].copy()

    for index, liste in timeseries.items():
        share = [float(item) * ind_vent_share for item in liste]
        timeseries.loc[index] = share

    dsm["p_set"] = timeseries.copy()

    return dsm

def calc_ind_site_timeseries(scenario):
    # calculate timeseries per site
    # -> using code from egon.data.datasets.industry.temporal:
    # calc_load_curves_ind_sites

    # select demands per industrial site including the subsector information
    source1 = config.datasets()["DSM_CTS_industry"]["sources"][
        "demandregio_ind_sites"
    ]

    demands_ind_sites = db.select_dataframe(
        f"""SELECT industrial_sites_id, wz, demand
            FROM {source1['schema']}.{source1['table']}
            WHERE scenario = '{scenario}'
            AND demand > 0
            """
    ).set_index(["industrial_sites_id"])

    # select industrial sites as demand_areas from database
    source2 = config.datasets()["DSM_CTS_industry"]["sources"]["ind_sites"]

    demand_area = db.select_geodataframe(
        f"""SELECT id, geom, subsector FROM
            {source2['schema']}.{source2['table']}""",
        index_col="id",
        geom_col="geom",
        epsg=3035,
    )

    # replace entries to bring it in line with demandregio's subsector
    # definitions
    demands_ind_sites.replace(1718, 17, inplace=True)
    share_wz_sites = demands_ind_sites.copy()

    # create additional df on wz_share per industrial site, which is always set
    # to one as the industrial demand per site is subsector specific
    share_wz_sites.demand = 1
    share_wz_sites.reset_index(inplace=True)

    share_transpose = pd.DataFrame(
        index=share_wz_sites.industrial_sites_id.unique(),
        columns=share_wz_sites.wz.unique(),
    )
    share_transpose.index.rename("industrial_sites_id", inplace=True)
    for wz in share_transpose.columns:
        share_transpose[wz] = (
            share_wz_sites[share_wz_sites.wz == wz]
            .set_index("industrial_sites_id")
            .demand
        )

    # calculate load curves
    load_curves = calc_load_curve(share_transpose, demands_ind_sites["demand"])

    # identify bus per industrial site
    curves_bus = identify_bus(load_curves, demand_area)
    curves_bus.index = curves_bus["id"].astype(int)

    # initialize dataframe to be returned
    ts = pd.DataFrame(
        data=curves_bus["bus_id"], index=curves_bus["id"].astype(int)
    )
    ts["scenario_name"] = scenario
    curves_bus.drop(["id", "bus_id", "geom"], axis=1, inplace=True)
    ts["p_set"] = curves_bus.values.tolist()

    # add subsector to relate to Schmidt's tables afterwards
    ts["application"] = demand_area["subsector"]

    return ts

def relate_to_schmidt_sites(dsm):
    # import industrial sites by Schmidt
    source = config.datasets()["DSM_CTS_industry"]["sources"][
        "ind_sites_schmidt"
    ]

    schmidt = db.select_dataframe(
        f"""SELECT application, geom FROM
            {source['schema']}.{source['table']}"""
    )

    # relate calculated timeseries (dsm) to Schmidt's industrial sites
    applications = np.unique(schmidt["application"])
    dsm = pd.DataFrame(dsm[dsm["application"].isin(applications)])

    # initialize dataframe to be returned
    dsm.rename(
        columns={"scenario_name": "scn_name", "bus_id": "bus"},
        inplace=True,
    )

    return dsm

def ind_sites_data_import():
    """
    Import industry sites data necessary to identify DSM-potential.
    """
    # calculate timeseries per site

    # scenario eGon2035
    dsm_2035 = calc_ind_site_timeseries("eGon2035")
    dsm_2035.reset_index(inplace=True)
    # scenario eGon100RE
    dsm_100 = calc_ind_site_timeseries("eGon100RE")
    dsm_100.reset_index(inplace=True)
    # bring df for both scenarios together
    dsm_100.index = range(len(dsm_2035), len(dsm_2035) + len(dsm_100))
    dsm = pd.concat([dsm_2035, dsm_100])

    # relate calculated timeseries to Schmidt's industrial sites
    dsm = relate_to_schmidt_sites(dsm)

    return dsm[["application", "id", "bus", "scn_name", "p_set"]]

def calculate_potentials(s_flex, s_util, s_inc, s_dec, delta_t, dsm):
    """
    Calculate DSM-potential per bus using the methods by Heitkoetter et al.:
    https://doi.org/10.1016/j.adapen.2020.100001

    Parameters
    ----------
    s_flex: float
        Feasibility factor to account for socio-technical restrictions
    s_util: float
        Average annual utilisation rate
    s_inc: float
        Shiftable share of installed capacity up to which load can be
        increased considering technical limitations
    s_dec: float
        Shiftable share of installed capacity up to which load can be
        decreased considering technical limitations
    delta_t: int
        Maximum shift duration in hours
    dsm: DataFrame
        List of existing buses with DSM-potential including timeseries of
        loads
    """

    # copy relevant timeseries
    timeseries = dsm["p_set"].copy()

    # calculate scheduled load L(t)
    scheduled_load = timeseries.copy()

    for index, liste in scheduled_load.items():
        share = [item * s_flex for item in liste]
        scheduled_load.loc[index] = share

    # calculate maximum capacity Lambda

    # calculate annual energy requirement
    energy_annual = pd.Series(index=timeseries.index, dtype=float)
    for index, liste in timeseries.items():
        energy_annual.loc[index] = sum(liste)

    # calculate Lambda
    lam = (energy_annual * s_flex) / (8760 * s_util)

    # calculation of P_max and P_min

    # P_max
    p_max = scheduled_load.copy()
    for index, liste in scheduled_load.items():
        lamb = lam.loc[index]
        p = []
        for item in liste:
            value = lamb * s_inc - item
            if value < 0:
                value = 0
            p.append(value)
        p_max.loc[index] = p

    # P_min
    p_min = scheduled_load.copy()
    for index, liste in scheduled_load.items():
        lamb = lam.loc[index]
        p = []
        for item in liste:
            value = -(item - lamb * s_dec)
            if value > 0:
                value = 0
            p.append(value)
        p_min.loc[index] = p

    # calculation of E_max and E_min
    e_max = scheduled_load.copy()
    e_min = scheduled_load.copy()

    for index, liste in scheduled_load.items():
        emin = []
        emax = []
        for i in range(len(liste)):
            if i + delta_t > len(liste):
                emax.append(
                    sum(liste[i:]) + sum(liste[: delta_t - (len(liste) - i)])
                )
            else:
                emax.append(sum(liste[i : i + delta_t]))
            if i - delta_t < 0:
                emin.append(
                    -1
                    * (
                        sum(liste[:i])
                        + sum(liste[len(liste) - delta_t + i :])
                    )
                )
            else:
                emin.append(-1 * sum(liste[i - delta_t : i]))
        e_max.loc[index] = emax
        e_min.loc[index] = emin

    return p_max, p_min, e_max, e_min

def create_dsm_components(con, p_max, p_min, e_max, e_min, dsm):
    """
    Create components representing DSM.

    Parameters
    ----------
    con :
        Connection to database
    p_max: DataFrame
        Timeseries identifying maximum load increase
    p_min: DataFrame
        Timeseries identifying maximum load decrease
    e_max: DataFrame
        Timeseries identifying maximum energy amount to be preponed
    e_min: DataFrame
        Timeseries identifying maximum energy amount to be postponed
    dsm: DataFrame
        List of existing buses with DSM-potential including timeseries of loads
    """

    # calculate P_nom and P per unit
    p_nom = pd.Series(index=p_max.index, dtype=float)
    for index, row in p_max.items():
        nom = max(max(row), abs(min(p_min.loc[index])))
        p_nom.loc[index] = nom
        new = [element / nom for element in row]
        p_max.loc[index] = new
        new = [element / nom for element in p_min.loc[index]]
        p_min.loc[index] = new

    # calculate E_nom and E per unit
    e_nom = pd.Series(index=p_min.index, dtype=float)
    for index, row in e_max.items():
        nom = max(max(row), abs(min(e_min.loc[index])))
        e_nom.loc[index] = nom
        new = [element / nom for element in row]
        e_max.loc[index] = new
        new = [element / nom for element in e_min.loc[index]]
        e_min.loc[index] = new

    # add DSM-buses to "original" buses
    dsm_buses = gpd.GeoDataFrame(index=dsm.index)
    dsm_buses["original_bus"] = dsm["bus"].copy()
    dsm_buses["scn_name"] = dsm["scn_name"].copy()

    # get original buses and add copy of relevant information
    target1 = config.datasets()["DSM_CTS_industry"]["targets"]["bus"]
    original_buses = db.select_geodataframe(
        f"""SELECT bus_id, v_nom, scn_name, x, y, geom FROM
            {target1['schema']}.{target1['table']}""",
        geom_col="geom",
        epsg=4326,
    )

    # copy relevant information from original buses to DSM-buses
    dsm_buses["index"] = dsm_buses.index
    originals = original_buses[
        original_buses["bus_id"].isin(np.unique(dsm_buses["original_bus"]))
    ]
    dsm_buses = originals.merge(
        dsm_buses,
        left_on=["bus_id", "scn_name"],
        right_on=["original_bus", "scn_name"],
    )
    dsm_buses.index = dsm_buses["index"]
    dsm_buses.drop(["bus_id", "index"], axis=1, inplace=True)

    # new bus_ids for DSM-buses
    max_id = original_buses["bus_id"].max()
    if np.isnan(max_id):
        max_id = 0
    dsm_id = max_id + 1
    bus_id = pd.Series(index=dsm_buses.index, dtype=int)

    # Get number of DSM buses for both scenarios
    rows_per_scenario = (
        dsm_buses.groupby("scn_name").count().original_bus.to_dict()
    )

    # Assignment of DSM ids
    bus_id.iloc[: rows_per_scenario.get("eGon2035", 0)] = range(
        dsm_id, dsm_id + rows_per_scenario.get("eGon2035", 0)
    )

    bus_id.iloc[
        rows_per_scenario.get("eGon2035", 0) : rows_per_scenario.get(
            "eGon2035", 0
        )
        + rows_per_scenario.get("eGon100RE", 0)
    ] = range(dsm_id, dsm_id + rows_per_scenario.get("eGon100RE", 0))

    dsm_buses["bus_id"] = bus_id

    # add links from "original" buses to DSM-buses
    dsm_links = pd.DataFrame(index=dsm_buses.index)
    dsm_links["original_bus"] = dsm_buses["original_bus"].copy()
    dsm_links["dsm_bus"] = dsm_buses["bus_id"].copy()
    dsm_links["scn_name"] = dsm_buses["scn_name"].copy()

    # set link_id
    target2 = config.datasets()["DSM_CTS_industry"]["targets"]["link"]
    sql = f"""SELECT link_id FROM {target2['schema']}.{target2['table']}"""
    max_id = pd.read_sql_query(sql, con)
    max_id = max_id["link_id"].max()
    if np.isnan(max_id):
        max_id = 0
    dsm_id = max_id + 1
    link_id = pd.Series(index=dsm_buses.index, dtype=int)

    # Assignment of link ids
    link_id.iloc[: rows_per_scenario.get("eGon2035", 0)] = range(
        dsm_id, dsm_id + rows_per_scenario.get("eGon2035", 0)
    )

    link_id.iloc[
        rows_per_scenario.get("eGon2035", 0) : rows_per_scenario.get(
            "eGon2035", 0
        )
        + rows_per_scenario.get("eGon100RE", 0)
    ] = range(dsm_id, dsm_id + rows_per_scenario.get("eGon100RE", 0))

    dsm_links["link_id"] = link_id

    # add calculated timeseries to df to be returned
    dsm_links["p_nom"] = p_nom
    dsm_links["p_min"] = p_min
    dsm_links["p_max"] = p_max

    # add DSM-stores
    dsm_stores = pd.DataFrame(index=dsm_buses.index)
    dsm_stores["bus"] = dsm_buses["bus_id"].copy()
    dsm_stores["scn_name"] = dsm_buses["scn_name"].copy()
    dsm_stores["original_bus"] = dsm_buses["original_bus"].copy()

    # set store_id
    target3 = config.datasets()["DSM_CTS_industry"]["targets"]["store"]
    sql = f"""SELECT store_id FROM {target3['schema']}.{target3['table']}"""
    max_id = pd.read_sql_query(sql, con)
    max_id = max_id["store_id"].max()
    if np.isnan(max_id):
        max_id = 0
    dsm_id = max_id + 1
    store_id = pd.Series(index=dsm_buses.index, dtype=int)

    # Assignment of store ids
    store_id.iloc[: rows_per_scenario.get("eGon2035", 0)] = range(
        dsm_id, dsm_id + rows_per_scenario.get("eGon2035", 0)
    )

    store_id.iloc[
        rows_per_scenario.get("eGon2035", 0) : rows_per_scenario.get(
            "eGon2035", 0
        )
        + rows_per_scenario.get("eGon100RE", 0)
    ] = range(dsm_id, dsm_id + rows_per_scenario.get("eGon100RE", 0))

    dsm_stores["store_id"] = store_id

    # add calculated timeseries to df to be returned
    dsm_stores["e_nom"] = e_nom
    dsm_stores["e_min"] = e_min
    dsm_stores["e_max"] = e_max

    return dsm_buses, dsm_links, dsm_stores

def aggregate_components(df_dsm_buses, df_dsm_links, df_dsm_stores):
    # aggregate buses
    grouper = [df_dsm_buses.original_bus, df_dsm_buses.scn_name]

    df_dsm_buses = df_dsm_buses.groupby(grouper).first()

    df_dsm_buses.reset_index(inplace=True)
    df_dsm_buses.sort_values("scn_name", inplace=True)

    # aggregate links
    df_dsm_links["p_max"] = df_dsm_links["p_max"].apply(lambda x: np.array(x))
    df_dsm_links["p_min"] = df_dsm_links["p_min"].apply(lambda x: np.array(x))

    grouper = [df_dsm_links.original_bus, df_dsm_links.scn_name]
    p_nom = df_dsm_links.groupby(grouper)["p_nom"].sum()
    p_max = df_dsm_links.groupby(grouper)["p_max"].apply(np.sum)
    p_min = df_dsm_links.groupby(grouper)["p_min"].apply(np.sum)

    df_dsm_links = df_dsm_links.groupby(grouper).first()
    df_dsm_links.p_nom = p_nom
    df_dsm_links.p_max = p_max
    df_dsm_links.p_min = p_min

    df_dsm_links["p_max"] = df_dsm_links["p_max"].apply(lambda x: list(x))
    df_dsm_links["p_min"] = df_dsm_links["p_min"].apply(lambda x: list(x))

    df_dsm_links.reset_index(inplace=True)
    df_dsm_links.sort_values("scn_name", inplace=True)

    # aggregate stores
    df_dsm_stores["e_max"] = df_dsm_stores["e_max"].apply(
        lambda x: np.array(x)
    )
    df_dsm_stores["e_min"] = df_dsm_stores["e_min"].apply(
        lambda x: np.array(x)
    )

    grouper = [df_dsm_stores.original_bus, df_dsm_stores.scn_name]
    e_nom = df_dsm_stores.groupby(grouper)["e_nom"].sum()
    e_max = df_dsm_stores.groupby(grouper)["e_max"].apply(np.sum)
    e_min = df_dsm_stores.groupby(grouper)["e_min"].apply(np.sum)

    df_dsm_stores = df_dsm_stores.groupby(grouper).first()
    df_dsm_stores.e_nom = e_nom
    df_dsm_stores.e_max = e_max
    df_dsm_stores.e_min = e_min

    df_dsm_stores["e_max"] = df_dsm_stores["e_max"].apply(lambda x: list(x))
    df_dsm_stores["e_min"] = df_dsm_stores["e_min"].apply(lambda x: list(x))

    df_dsm_stores.reset_index(inplace=True)
    df_dsm_stores.sort_values("scn_name", inplace=True)

    # select new bus_ids for aggregated buses and add to links and stores
    bus_id = db.next_etrago_id("Bus") + df_dsm_buses.index

    df_dsm_buses["bus_id"] = bus_id
    df_dsm_links["dsm_bus"] = bus_id
    df_dsm_stores["bus"] = bus_id

    # select new link_ids for aggregated links
    link_id = db.next_etrago_id("Link") + df_dsm_links.index

    df_dsm_links["link_id"] = link_id

    # select new store_ids for aggregated stores
    store_id = db.next_etrago_id("Store") + df_dsm_stores.index

    df_dsm_stores["store_id"] = store_id

    return df_dsm_buses, df_dsm_links, df_dsm_stores

def data_export(dsm_buses, dsm_links, dsm_stores, carrier):
    """
    Export new components to database.

    Parameters
    ----------
    dsm_buses: DataFrame
        Buses representing locations of DSM-potential
    dsm_links: DataFrame
        Links connecting DSM-buses and DSM-stores
    dsm_stores: DataFrame
        Stores representing DSM-potential
    carrier: String
        Remark to be filled in column 'carrier' identifying DSM-potential
    """

    targets = config.datasets()["DSM_CTS_industry"]["targets"]

    # dsm_buses
    insert_buses = gpd.GeoDataFrame(
        index=dsm_buses.index,
        data=dsm_buses["geom"],
        geometry="geom",
        crs=dsm_buses.crs,
    )
    insert_buses["scn_name"] = dsm_buses["scn_name"]
    insert_buses["bus_id"] = dsm_buses["bus_id"]
    insert_buses["v_nom"] = dsm_buses["v_nom"]
    insert_buses["carrier"] = carrier
    insert_buses["x"] = dsm_buses["x"]
    insert_buses["y"] = dsm_buses["y"]

    # insert into database
    insert_buses.to_postgis(
        targets["bus"]["table"],
        con=db.engine(),
        schema=targets["bus"]["schema"],
        if_exists="append",
        index=False,
        dtype={"geom": "geometry"},
    )

    # dsm_links
    insert_links = pd.DataFrame(index=dsm_links.index)
    insert_links["scn_name"] = dsm_links["scn_name"]
    insert_links["link_id"] = dsm_links["link_id"]
    insert_links["bus0"] = dsm_links["original_bus"]
    insert_links["bus1"] = dsm_links["dsm_bus"]
    insert_links["carrier"] = carrier
    insert_links["p_nom"] = dsm_links["p_nom"]

    # insert into database
    insert_links.to_sql(
        targets["link"]["table"],
        con=db.engine(),
        schema=targets["link"]["schema"],
        if_exists="append",
        index=False,
    )

    insert_links_timeseries = pd.DataFrame(index=dsm_links.index)
    insert_links_timeseries["scn_name"] = dsm_links["scn_name"]
    insert_links_timeseries["link_id"] = dsm_links["link_id"]
    insert_links_timeseries["p_min_pu"] = dsm_links["p_min"]
    insert_links_timeseries["p_max_pu"] = dsm_links["p_max"]
    insert_links_timeseries["temp_id"] = 1

    # insert into database
    insert_links_timeseries.to_sql(
        targets["link_timeseries"]["table"],
        con=db.engine(),
        schema=targets["link_timeseries"]["schema"],
        if_exists="append",
        index=False,
    )

    # dsm_stores
    insert_stores = pd.DataFrame(index=dsm_stores.index)
    insert_stores["scn_name"] = dsm_stores["scn_name"]
    insert_stores["store_id"] = dsm_stores["store_id"]
    insert_stores["bus"] = dsm_stores["bus"]
    insert_stores["carrier"] = carrier
    insert_stores["e_nom"] = dsm_stores["e_nom"]

    # insert into database
    insert_stores.to_sql(
        targets["store"]["table"],
        con=db.engine(),
        schema=targets["store"]["schema"],
        if_exists="append",
        index=False,
    )

    insert_stores_timeseries = pd.DataFrame(index=dsm_stores.index)
    insert_stores_timeseries["scn_name"] = dsm_stores["scn_name"]
    insert_stores_timeseries["store_id"] = dsm_stores["store_id"]
    insert_stores_timeseries["e_min_pu"] = dsm_stores["e_min"]
    insert_stores_timeseries["e_max_pu"] = dsm_stores["e_max"]
    insert_stores_timeseries["temp_id"] = 1

    # insert into database
    insert_stores_timeseries.to_sql(
        targets["store_timeseries"]["table"],
        con=db.engine(),
        schema=targets["store_timeseries"]["schema"],
        if_exists="append",
        index=False,
    )

def delete_dsm_entries(carrier):
    """
    Deletes DSM-components from the database if they already exist before
    creating new ones.

    Parameters
    ----------
    carrier : str
        Remark in column 'carrier' identifying the DSM-potential
    """

    targets = config.datasets()["DSM_CTS_industry"]["targets"]

    # buses
    sql = f"""
        DELETE FROM {targets["bus"]["schema"]}.{targets["bus"]["table"]} b
        WHERE (b.carrier LIKE '{carrier}');
        """
    db.execute_sql(sql)

    # links
    sql = f"""
        DELETE FROM {targets["link_timeseries"]["schema"]}.
        {targets["link_timeseries"]["table"]} t
        WHERE t.link_id IN
        (
            SELECT l.link_id FROM {targets["link"]["schema"]}.
            {targets["link"]["table"]} l
            WHERE l.carrier LIKE '{carrier}'
        );
        """
    db.execute_sql(sql)

    sql = f"""
        DELETE FROM {targets["link"]["schema"]}.
        {targets["link"]["table"]} l
        WHERE (l.carrier LIKE '{carrier}');
        """
    db.execute_sql(sql)

    # stores
    sql = f"""
        DELETE FROM {targets["store_timeseries"]["schema"]}.
        {targets["store_timeseries"]["table"]} t
        WHERE t.store_id IN
        (
            SELECT s.store_id FROM {targets["store"]["schema"]}.
            {targets["store"]["table"]} s
            WHERE s.carrier LIKE '{carrier}'
        );
        """
    db.execute_sql(sql)

    sql = f"""
        DELETE FROM {targets["store"]["schema"]}.{targets["store"]["table"]} s
        WHERE (s.carrier LIKE '{carrier}');
        """
    db.execute_sql(sql)

def dsm_cts_ind(
    con=db.engine(),
    cts_cool_vent_ac_share=0.22,
    ind_vent_cool_share=0.039,
    ind_vent_share=0.017,
):
    """
    Execute methodology to create and implement components for DSM considering

    a) CTS per osm-area: combined potentials of cooling, ventilation and air
       conditioning
    b) Industry per osm-area: combined potentials of cooling and ventilation
    c) Industrial Sites: potentials of ventilation in sites of
       "Wirtschaftszweig" (WZ) 23
    d) Industrial Sites: potentials of sites specified by subsectors
       identified by Schmidt (https://zenodo.org/record/3613767#.YTsGwVtCRhG):
       Paper, Recycled Paper, Pulp, Cement

    Modelled using the methods by Heitkoetter et al.:
    https://doi.org/10.1016/j.adapen.2020.100001

    Parameters
    ----------
    con :
        Connection to database
    cts_cool_vent_ac_share : float
        Share of cooling, ventilation and AC in CTS demand
    ind_vent_cool_share : float
        Share of cooling and ventilation in industry demand
    ind_vent_share : float
        Share of ventilation in industry demand in sites of WZ 23
    """

    # CTS per osm-area: cooling, ventilation and air conditioning

    print(" ")
    print("CTS per osm-area: cooling, ventilation and air conditioning")
    print(" ")

    dsm = cts_data_import(cts_cool_vent_ac_share)

    # calculate combined potentials of cooling, ventilation and air
    # conditioning in CTS using combined parameters by Heitkoetter et al.
    p_max, p_min, e_max, e_min = calculate_potentials(
        s_flex=S_FLEX_CTS,
        s_util=S_UTIL_CTS,
        s_inc=S_INC_CTS,
        s_dec=S_DEC_CTS,
        delta_t=DELTA_T_CTS,
        dsm=dsm,
    )

    dsm_buses, dsm_links, dsm_stores = create_dsm_components(
        con, p_max, p_min, e_max, e_min, dsm
    )

    df_dsm_buses = dsm_buses.copy()
    df_dsm_links = dsm_links.copy()
    df_dsm_stores = dsm_stores.copy()

    # industry per osm-area: cooling and ventilation

    print(" ")
    print("industry per osm-area: cooling and ventilation")
    print(" ")

    dsm = ind_osm_data_import(ind_vent_cool_share)

    # calculate combined potentials of cooling and ventilation in the
    # industrial sector using combined parameters by Heitkoetter et al.
    p_max, p_min, e_max, e_min = calculate_potentials(
        s_flex=S_FLEX_OSM,
        s_util=S_UTIL_OSM,
        s_inc=S_INC_OSM,
        s_dec=S_DEC_OSM,
        delta_t=DELTA_T_OSM,
        dsm=dsm,
    )

    dsm_buses, dsm_links, dsm_stores = create_dsm_components(
        con, p_max, p_min, e_max, e_min, dsm
    )

    df_dsm_buses = gpd.GeoDataFrame(
        pd.concat([df_dsm_buses, dsm_buses], ignore_index=True),
        crs="EPSG:4326",
    )
    df_dsm_links = pd.DataFrame(
        pd.concat([df_dsm_links, dsm_links], ignore_index=True)
    )
    df_dsm_stores = pd.DataFrame(
        pd.concat([df_dsm_stores, dsm_stores], ignore_index=True)
    )

    # industry sites

    # industry sites: different applications
    dsm = ind_sites_data_import()

    print(" ")
    print("industry sites: paper")
    print(" ")

    dsm_paper = gpd.GeoDataFrame(
        dsm[
            dsm["application"].isin(
                [
                    "Graphic Paper",
                    "Packing Paper and Board",
                    "Hygiene Paper",
                    "Technical/Special Paper and Board",
                ]
            )
        ]
    )

    # calculate potentials of industrial sites with paper applications
    # using parameters by Heitkoetter et al.
    p_max, p_min, e_max, e_min = calculate_potentials(
        s_flex=S_FLEX_PAPER,
        s_util=S_UTIL_PAPER,
        s_inc=S_INC_PAPER,
        s_dec=S_DEC_PAPER,
        delta_t=DELTA_T_PAPER,
        dsm=dsm_paper,
    )

    dsm_buses, dsm_links, dsm_stores = create_dsm_components(
        con, p_max, p_min, e_max, e_min, dsm_paper
    )

    df_dsm_buses = gpd.GeoDataFrame(
        pd.concat([df_dsm_buses, dsm_buses], ignore_index=True),
        crs="EPSG:4326",
    )
    df_dsm_links = pd.DataFrame(
        pd.concat([df_dsm_links, dsm_links], ignore_index=True)
    )
    df_dsm_stores = pd.DataFrame(
        pd.concat([df_dsm_stores, dsm_stores], ignore_index=True)
    )

    print(" ")
    print("industry sites: recycled paper")
    print(" ")

    dsm_recycled_paper = gpd.GeoDataFrame(
        dsm[dsm["application"] == "Recycled Paper"]
    )

    # calculate potentials of industrial sites with recycled-paper
    # applications using parameters by Heitkoetter et al.
    p_max, p_min, e_max, e_min = calculate_potentials(
        s_flex=S_FLEX_RECYCLED_PAPER,
        s_util=S_UTIL_RECYCLED_PAPER,
        s_inc=S_INC_RECYCLED_PAPER,
        s_dec=S_DEC_RECYCLED_PAPER,
        delta_t=DELTA_T_RECYCLED_PAPER,
        dsm=dsm_recycled_paper,
    )

    dsm_buses, dsm_links, dsm_stores = create_dsm_components(
        con, p_max, p_min, e_max, e_min, dsm_recycled_paper
    )

    df_dsm_buses = gpd.GeoDataFrame(
        pd.concat([df_dsm_buses, dsm_buses], ignore_index=True),
        crs="EPSG:4326",
    )
    df_dsm_links = pd.DataFrame(
        pd.concat([df_dsm_links, dsm_links], ignore_index=True)
    )
    df_dsm_stores = pd.DataFrame(
        pd.concat([df_dsm_stores, dsm_stores], ignore_index=True)
    )

    print(" ")
    print("industry sites: pulp")
    print(" ")

    dsm_pulp = gpd.GeoDataFrame(dsm[dsm["application"] == "Mechanical Pulp"])

    # calculate potentials of industrial sites with pulp applications
    # using parameters by Heitkoetter et al.
    p_max, p_min, e_max, e_min = calculate_potentials(
        s_flex=S_FLEX_PULP,
        s_util=S_UTIL_PULP,
        s_inc=S_INC_PULP,
        s_dec=S_DEC_PULP,
        delta_t=DELTA_T_PULP,
        dsm=dsm_pulp,
    )

    dsm_buses, dsm_links, dsm_stores = create_dsm_components(
        con, p_max, p_min, e_max, e_min, dsm_pulp
    )

    df_dsm_buses = gpd.GeoDataFrame(
        pd.concat([df_dsm_buses, dsm_buses], ignore_index=True),
        crs="EPSG:4326",
    )
    df_dsm_links = pd.DataFrame(
        pd.concat([df_dsm_links, dsm_links], ignore_index=True)
    )
    df_dsm_stores = pd.DataFrame(
        pd.concat([df_dsm_stores, dsm_stores], ignore_index=True)
    )

    # industry sites: cement

    print(" ")
    print("industry sites: cement")
    print(" ")

    dsm_cement = gpd.GeoDataFrame(dsm[dsm["application"] == "Cement Mill"])

    # calculate potentials of industrial sites with cement applications
    # using parameters by Heitkoetter et al.
    p_max, p_min, e_max, e_min = calculate_potentials(
        s_flex=S_FLEX_CEMENT,
        s_util=S_UTIL_CEMENT,
        s_inc=S_INC_CEMENT,
        s_dec=S_DEC_CEMENT,
        delta_t=DELTA_T_CEMENT,
        dsm=dsm_cement,
    )

    dsm_buses, dsm_links, dsm_stores = create_dsm_components(
        con, p_max, p_min, e_max, e_min, dsm_cement
    )

    df_dsm_buses = gpd.GeoDataFrame(
        pd.concat([df_dsm_buses, dsm_buses], ignore_index=True),
        crs="EPSG:4326",
    )
    df_dsm_links = pd.DataFrame(
        pd.concat([df_dsm_links, dsm_links], ignore_index=True)
    )
    df_dsm_stores = pd.DataFrame(
        pd.concat([df_dsm_stores, dsm_stores], ignore_index=True)
    )

    # industry sites: ventilation in WZ23

    print(" ")
    print("industry sites: ventilation in WZ23")
    print(" ")

    dsm = ind_sites_vent_data_import(ind_vent_share, wz=WZ)

    # drop entries of Cement Mills whose DSM-potentials have already been
    # modelled
    cement = np.unique(dsm_cement["bus"].values)
    index_names = np.array(dsm[dsm["bus"].isin(cement)].index)
    dsm.drop(index_names, inplace=True)

    # calculate potentials of ventilation in industrial sites of WZ 23
    # using parameters by Heitkoetter et al.
    p_max, p_min, e_max, e_min = calculate_potentials(
        s_flex=S_FLEX_WZ,
        s_util=S_UTIL_WZ,
        s_inc=S_INC_WZ,
        s_dec=S_DEC_WZ,
        delta_t=DELTA_T_WZ,
        dsm=dsm,
    )

    dsm_buses, dsm_links, dsm_stores = create_dsm_components(
        con, p_max, p_min, e_max, e_min, dsm
    )

    df_dsm_buses = gpd.GeoDataFrame(
        pd.concat([df_dsm_buses, dsm_buses], ignore_index=True),
        crs="EPSG:4326",
    )
    df_dsm_links = pd.DataFrame(
        pd.concat([df_dsm_links, dsm_links], ignore_index=True)
    )
    df_dsm_stores = pd.DataFrame(
        pd.concat([df_dsm_stores, dsm_stores], ignore_index=True)
    )

    # TODO
    #     # aggregate DSM components per substation
    #     dsm_buses, dsm_links, dsm_stores = aggregate_components(
    #         df_dsm_buses, df_dsm_links, df_dsm_stores
    #     )

    #     # export aggregated DSM components to database
    #     delete_dsm_entries("dsm-cts")
    #     delete_dsm_entries("dsm-ind-osm")
    #     delete_dsm_entries("dsm-ind-sites")
    #     delete_dsm_entries("dsm")

    data_export(dsm_buses, dsm_links, dsm_stores, carrier="dsm")

def col_per_unit(lst):
    # normalise a list of values to per-unit by its maximum absolute value
    max_val = max(abs(val) for val in lst)

    return [val / max_val for val in lst]


def calc_per_unit(df):
    # convert the potential columns to a per-unit representation
    for col in ["p_max_pu", "p_min_pu", "e_max_pu", "e_min_pu"]:
        df[col] = df[col].apply(col_per_unit)

    return df

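A minimal sketch (not part of the module) of what `calc_per_unit` does to the potential columns: each cell holds a list of values, and every list is rescaled by its own maximum absolute value so that entries lie in [-1, 1]. The helpers are restated here so the snippet is self-contained; the single-row frame is an illustrative assumption.

```python
import pandas as pd


def col_per_unit(lst):
    # normalise a list of values to per-unit by its maximum absolute value
    max_val = max(abs(val) for val in lst)
    return [val / max_val for val in lst]


def calc_per_unit(df):
    # convert the potential columns to a per-unit representation
    for col in ["p_max_pu", "p_min_pu", "e_max_pu", "e_min_pu"]:
        df[col] = df[col].apply(col_per_unit)
    return df


df = pd.DataFrame(
    {
        "p_max_pu": [[2.0, 4.0]],
        "p_min_pu": [[-1.0, -2.0]],
        "e_max_pu": [[10.0, 5.0]],
        "e_min_pu": [[-10.0, -5.0]],
    }
)
df = calc_per_unit(df)
# each list is scaled by its own maximum magnitude, e.g. [2.0, 4.0] -> [0.5, 1.0]
```

One caveat worth noting: a column whose list is all zeros would raise a `ZeroDivisionError`, so the helpers implicitly assume non-zero potentials.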
def dsm_cts_ind_individual(
    con=CON,
    cts_cool_vent_ac_share=CTS_COOL_VENT_AC_SHARE,
    ind_vent_cool_share=IND_VENT_COOL_SHARE,
    ind_vent_share=IND_VENT_SHARE,
):
    """
    Execute methodology to create and implement components for DSM considering

    a) CTS per osm-area: combined potentials of cooling, ventilation and air
       conditioning
    b) Industry per osm-area: combined potentials of cooling and ventilation
    c) Industrial Sites: potentials of ventilation in sites of
       "Wirtschaftszweig" (WZ) 23
    d) Industrial Sites: potentials of sites specified by subsectors
       identified by Schmidt (https://zenodo.org/record/3613767#.YTsGwVtCRhG):
       Paper, Recycled Paper, Pulp, Cement

    Modelled using the methods by Heitkoetter et al.:
    https://doi.org/10.1016/j.adapen.2020.100001

    Parameters
    ----------
    con :
        Connection to database
    cts_cool_vent_ac_share : float
        Share of cooling, ventilation and AC in CTS demand
    ind_vent_cool_share : float
        Share of cooling and ventilation in industry demand
    ind_vent_share : float
        Share of ventilation in industry demand in sites of WZ 23
    """

    # CTS per osm-area: cooling, ventilation and air conditioning

    print(" ")
    print("CTS per osm-area: cooling, ventilation and air conditioning")
    print(" ")

    dsm = cts_data_import(cts_cool_vent_ac_share)

    # calculate combined potentials of cooling, ventilation and air
    # conditioning in CTS using combined parameters by Heitkoetter et al.
    vals = calculate_potentials(
        s_flex=S_FLEX_CTS,
        s_util=S_UTIL_CTS,
        s_inc=S_INC_CTS,
        s_dec=S_DEC_CTS,
        delta_t=DELTA_T_CTS,
        dsm=dsm,
    )

    # TODO: values are not yet per unit
    base_columns = [
        "bus",
        "scn_name",
        "p_set",
        "p_max_pu",
        "p_min_pu",
        "e_max_pu",
        "e_min_pu",
    ]

    cts_df = pd.concat([dsm, *vals], axis=1, ignore_index=True)
    cts_df.columns = base_columns
    cts_df = calc_per_unit(cts_df)

    print(" ")
    print("industry per osm-area: cooling and ventilation")
    print(" ")

    dsm = ind_osm_data_import_individual(ind_vent_cool_share)

    # calculate combined potentials of cooling and ventilation in the
    # industrial sector using combined parameters by Heitkoetter et al.
    vals = calculate_potentials(
        s_flex=S_FLEX_OSM,
        s_util=S_UTIL_OSM,
        s_inc=S_INC_OSM,
        s_dec=S_DEC_OSM,
        delta_t=DELTA_T_OSM,
        dsm=dsm,
    )

    columns = ["osm_id"] + base_columns

    osm_df = pd.concat([dsm, *vals], axis=1, ignore_index=True)
    osm_df.columns = columns
    osm_df = calc_per_unit(osm_df)

    # industry sites

    # industry sites: different applications
    dsm = ind_sites_data_import()

    print(" ")
    print("industry sites: paper")
    print(" ")

    dsm_paper = gpd.GeoDataFrame(
        dsm[
            dsm["application"].isin(
                [
                    "Graphic Paper",
                    "Packing Paper and Board",
                    "Hygiene Paper",
                    "Technical/Special Paper and Board",
                ]
            )
        ]
    )

    # calculate potentials of industrial sites with paper applications
    # using parameters by Heitkoetter et al.
    vals = calculate_potentials(
        s_flex=S_FLEX_PAPER,
        s_util=S_UTIL_PAPER,
        s_inc=S_INC_PAPER,
        s_dec=S_DEC_PAPER,
        delta_t=DELTA_T_PAPER,
        dsm=dsm_paper,
    )

    columns = ["application", "id"] + base_columns

    paper_df = pd.concat([dsm_paper, *vals], axis=1, ignore_index=True)
    paper_df.columns = columns
    paper_df = calc_per_unit(paper_df)

    print(" ")
    print("industry sites: recycled paper")
    print(" ")

    dsm_recycled_paper = gpd.GeoDataFrame(
        dsm[dsm["application"] == "Recycled Paper"]
    )

    # calculate potentials of industrial sites with recycled-paper
    # applications using parameters by Heitkoetter et al.
    vals = calculate_potentials(
        s_flex=S_FLEX_RECYCLED_PAPER,
        s_util=S_UTIL_RECYCLED_PAPER,
        s_inc=S_INC_RECYCLED_PAPER,
        s_dec=S_DEC_RECYCLED_PAPER,
        delta_t=DELTA_T_RECYCLED_PAPER,
        dsm=dsm_recycled_paper,
    )

    recycled_paper_df = pd.concat(
        [dsm_recycled_paper, *vals], axis=1, ignore_index=True
    )
    recycled_paper_df.columns = columns
    recycled_paper_df = calc_per_unit(recycled_paper_df)

    print(" ")
    print("industry sites: pulp")
    print(" ")

    dsm_pulp = gpd.GeoDataFrame(dsm[dsm["application"] == "Mechanical Pulp"])

    # calculate potentials of industrial sites with pulp applications
    # using parameters by Heitkoetter et al.
    vals = calculate_potentials(
        s_flex=S_FLEX_PULP,
        s_util=S_UTIL_PULP,
        s_inc=S_INC_PULP,
        s_dec=S_DEC_PULP,
        delta_t=DELTA_T_PULP,
        dsm=dsm_pulp,
    )

    pulp_df = pd.concat([dsm_pulp, *vals], axis=1, ignore_index=True)
    pulp_df.columns = columns
    pulp_df = calc_per_unit(pulp_df)

    # industry sites: cement

    print(" ")
    print("industry sites: cement")
    print(" ")

    dsm_cement = gpd.GeoDataFrame(dsm[dsm["application"] == "Cement Mill"])

    # calculate potentials of industrial sites with cement applications
    # using parameters by Heitkoetter et al.
    vals = calculate_potentials(
        s_flex=S_FLEX_CEMENT,
        s_util=S_UTIL_CEMENT,
        s_inc=S_INC_CEMENT,
        s_dec=S_DEC_CEMENT,
        delta_t=DELTA_T_CEMENT,
        dsm=dsm_cement,
    )

    cement_df = pd.concat([dsm_cement, *vals], axis=1, ignore_index=True)
    cement_df.columns = columns
    cement_df = calc_per_unit(cement_df)

    # industry sites: ventilation in WZ23

    print(" ")
    print("industry sites: ventilation in WZ23")
    print(" ")

    dsm = ind_sites_vent_data_import_individual(ind_vent_share, wz=WZ)

    # drop entries of Cement Mills whose DSM-potentials have already been
    # modelled
    cement = np.unique(dsm_cement["bus"].values)
    index_names = np.array(dsm[dsm["bus"].isin(cement)].index)
    dsm.drop(index_names, inplace=True)

    # calculate potentials of ventilation in industrial sites of WZ 23
    # using parameters by Heitkoetter et al.
    vals = calculate_potentials(
        s_flex=S_FLEX_WZ,
        s_util=S_UTIL_WZ,
        s_inc=S_INC_WZ,
        s_dec=S_DEC_WZ,
        delta_t=DELTA_T_WZ,
        dsm=dsm,
    )

    columns = ["site_id"] + base_columns

    ind_sites_df = pd.concat([dsm, *vals], axis=1, ignore_index=True)
    ind_sites_df.columns = columns
    ind_sites_df = calc_per_unit(ind_sites_df)

    # TODO


def dsm_cts_ind_processing():
    dsm_cts_ind()

    dsm_cts_ind_individual()