Passed · Pull Request — dev (#905) · by unknown · created 01:33

data.datasets.heat_supply.individual_heating — Rating: F

Complexity: Total Complexity 61
Size/Duplication: Total Lines 1355 · Duplicated Lines 1.77 %
Importance: Changes 0

Metric Value
wmc 61
eloc 577
dl 24
loc 1355
rs 3.52
c 0
b 0
f 0

31 Functions

Rating   Name   Duplication   Size   Complexity  
A determine_hp_capacity_eGon2035_pypsa_eur_sec_bulk_2() 0 2 1
A determine_hp_cap_buildings_eGon100RE() 0 29 1
A delete_peak_loads_if_existing() 0 8 2
A get_zensus_cells_with_decentral_heat_demand_in_mv_grid() 0 63 2
A plot_heat_supply() 24 31 2
A get_profile_ids() 0 33 2
A get_peta_demand() 0 46 3
A timeit() 0 15 1
A adapt_numpy_int64() 0 2 1
A determine_hp_capacity_eGon2035_pypsa_eur_sec_bulk_5() 0 2 1
B determine_hp_capacity_eGon2035_pypsa_eur_sec() 0 230 4
A get_daily_demand_share() 0 24 2
A determine_hp_capacity_eGon2035_pypsa_eur_sec_bulk_1() 0 2 1
A get_cts_buildings_with_decentral_heat_demand_in_mv_grid() 0 46 2
B cascade_per_technology() 0 114 6
A determine_hp_capacity_eGon2035_pypsa_eur_sec_bulk_3() 0 2 1
A timeitlog() 0 23 2
A determine_min_hp_cap_pypsa_eur_sec() 0 29 2
A get_residential_buildings_with_decentral_heat_demand_in_mv_grid() 0 48 2
A determine_hp_capacity_eGon2035_pypsa_eur_sec_bulk_4() 0 2 1
B determine_buildings_with_hp_in_mv_grid() 0 92 2
A adapt_numpy_float64() 0 2 1
A get_total_heat_pump_capacity_of_mv_grid() 0 36 2
A determine_minimum_hp_capacity_per_building() 0 24 1
A desaggregate_hp_capacity() 0 33 1
A determine_hp_cap_buildings_eGon2035() 0 44 2
A get_daily_profiles() 0 17 2
A calc_residential_heat_profiles_per_mvgd() 0 86 3
A log_to_file() 0 11 1
A create_peak_load_table() 0 3 1
A cascade_heat_supply_indiv() 0 89 4

2 Methods

Rating   Name   Duplication   Size   Complexity  
A HeatPumps2050.__init__() 0 6 1
A HeatPumpsPypsaEurSecAnd2035.__init__() 0 12 1

How to fix

Duplicated Code

Duplicate code is one of the most pungent code smells. A rule that is often used is to re-structure code once it is duplicated in three or more places.

Common duplication problems usually have well-known refactoring solutions; the most frequent one is to extract the duplicated code into a single shared helper that every call site uses.
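A minimal sketch of such an extraction, using hypothetical names modeled on the duplicated plotting code flagged further down in this report (this helper does not exist in the module):

from matplotlib import pyplot as plt


def plot_column_on_mv_grids(mv_grids, column, label):
    """Shared plotting helper replacing two copies of the same code."""
    fig, ax = plt.subplots(1, 1)
    mv_grids.boundary.plot(linewidth=0.2, ax=ax, color="black")
    mv_grids.plot(
        ax=ax,
        column=column,
        cmap="magma_r",
        legend=True,
        legend_kwds={"label": label, "orientation": "vertical"},
    )
    return fig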

Complexity

Tip: Before tackling complexity, make sure that you eliminate any duplication first. This can often reduce the size of classes significantly.

Complex classes like data.datasets.heat_supply.individual_heating often do a lot of different things. To break such a class down, we need to identify a cohesive component within that class. A common approach to find such a component is to look for fields/methods that share the same prefixes, or suffixes.

Once you have determined the fields that belong together, you can apply the Extract Class refactoring. If the component makes sense as a sub-class, Extract Subclass is also a candidate, and is often faster.
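As an illustration for this module, the functions sharing the determine_hp_ prefix form one such cohesive group. A minimal sketch of an extracted component, assuming a hypothetical class name (this class is not part of the codebase):

class HeatPumpCapacityAllocation:
    """Hypothetical extracted component bundling the determine_hp_* logic."""

    def __init__(self, scenario, mv_grid_id):
        self.scenario = scenario
        self.mv_grid_id = mv_grid_id

    def minimum_capacity_per_building(
        self, peak_heat_demand, flexibility_factor=24 / 18, cop=1.7
    ):
        # same formula as determine_minimum_hp_capacity_per_building()
        return peak_heat_demand * flexibility_factor / cop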

1
"""The central module containing all code dealing with
2
individual heat supply.
3
4
"""
5
from loguru import logger
6
import numpy as np
7
import pandas as pd
8
import random
9
import saio
10
11
from pathlib import Path
12
import time
13
14
from psycopg2.extensions import AsIs, register_adapter
15
from sqlalchemy import ARRAY, REAL, Column, Integer, String
16
from sqlalchemy.ext.declarative import declarative_base
17
import geopandas as gpd
18
19
20
from egon.data import config, db
21
from egon.data.datasets import Dataset
22
from egon.data.datasets.electricity_demand_timeseries.cts_buildings import (
23
    calc_cts_building_profiles,
24
    CtsBuildings,
25
)
26
from egon.data.datasets.electricity_demand_timeseries.tools import (
27
    write_table_to_postgres,
28
)
29
from egon.data.datasets.heat_demand import EgonPetaHeat
30
from egon.data.datasets.heat_demand_timeseries.daily import (
31
    EgonDailyHeatDemandPerClimateZone,
32
    EgonMapZensusClimateZones,
33
)
34
from egon.data.datasets.heat_demand_timeseries.idp_pool import (
35
    EgonHeatTimeseries,
36
)
37
# maps zensus cells to MV grid districts
38
from egon.data.datasets.zensus_mv_grid_districts import MapZensusGridDistricts
39
40
engine = db.engine()
41
Base = declarative_base()
42
43
44
class EgonEtragoTimeseriesIndividualHeating(Base):
45
    __tablename__ = "egon_etrago_timeseries_individual_heating"
46
    __table_args__ = {"schema": "demand"}
47
    bus_id = Column(Integer, primary_key=True)
48
    scenario = Column(String, primary_key=True)
49
    carrier = Column(String, primary_key=True)
50
    dist_aggregated_mw = Column(ARRAY(REAL))
51
52
53
class HeatPumpsPypsaEurSecAnd2035(Dataset):
54
    def __init__(self, dependencies):
55
        super().__init__(
56
            name="HeatPumpsPypsaEurSecAnd2035",
57
            version="0.0.0",
58
            dependencies=dependencies,
59
            tasks=(create_peak_load_table,
60
                   delete_peak_loads_if_existing,
61
                   {determine_hp_capacity_eGon2035_pypsa_eur_sec_bulk_1,
62
                    determine_hp_capacity_eGon2035_pypsa_eur_sec_bulk_2,
63
                    determine_hp_capacity_eGon2035_pypsa_eur_sec_bulk_3,
64
                    determine_hp_capacity_eGon2035_pypsa_eur_sec_bulk_4,
65
                    determine_hp_capacity_eGon2035_pypsa_eur_sec_bulk_5,
66
                    }
67
                   ),
68
        )
69
70
71
class HeatPumps2050(Dataset):
72
    def __init__(self, dependencies):
73
        super().__init__(
74
            name="HeatPumps2050",
75
            version="0.0.0",
76
            dependencies=dependencies,
77
            tasks=(determine_hp_cap_buildings_eGon100RE,),
78
        )
79
80
81
class BuildingHeatPeakLoads(Base):
82
    __tablename__ = "egon_building_heat_peak_loads"
83
    __table_args__ = {"schema": "demand"}
84
85
    building_id = Column(Integer, primary_key=True)
86
    scenario = Column(String, primary_key=True)
87
    sector = Column(String, primary_key=True)
88
    peak_load_in_w = Column(REAL)
89
90
91
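# Adapters that tell psycopg2 how to handle numpy scalar types when they are
# written to the database (registered via register_adapter further below).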
def adapt_numpy_float64(numpy_float64):
92
    return AsIs(numpy_float64)
93
94
95
def adapt_numpy_int64(numpy_int64):
96
    return AsIs(numpy_int64)
97
98
99
def log_to_file(name):
100
    """Simple only file logger"""
101
    logger.remove()
102
    logger.add(
103
        Path(f"{name}.log"),
104
        format="{time} {level} {message}",
105
        # filter="my_module",
106
        level="TRACE",
107
    )
108
    logger.trace("Start trace logging")
109
    return logger
110
111
112
def timeit(func):
113
    """
114
    Decorator for measuring function's running time.
115
    """
116
117
    def measure_time(*args, **kw):
118
        start_time = time.time()
119
        result = func(*args, **kw)
120
        print(
121
            "Processing time of %s(): %.2f seconds."
122
            % (func.__qualname__, time.time() - start_time)
123
        )
124
        return result
125
126
    return measure_time
127
128
129
def timeitlog(func):
130
    """
131
    Decorator for measuring running time of residential heat peak load and
132
    logging it.
133
    """
134
135
    def measure_time(*args, **kw):
136
        start_time = time.time()
137
        result = func(*args, **kw)
138
        process_time = time.time() - start_time
139
        try:
140
            mvgd = kw["mvgd"]
141
        except KeyError:
142
            mvgd = "bulk"
143
        statement = (
144
            f"MVGD={mvgd} | Processing time of {func.__qualname__} | "
145
            f"{time.strftime('%H h, %M min, %S s', time.gmtime(process_time))}"
146
        )
147
        logger.trace(statement)
148
        print(statement)
149
        return result
150
151
    return measure_time
152
153
154
def cascade_per_technology(
155
    heat_per_mv,
156
    technologies,
157
    scenario,
158
    distribution_level,
159
    max_size_individual_chp=0.05,
160
):
161
162
    """Add plants for individual heat.
163
    Currently only on mv grid district level.
164
165
    Parameters
166
    ----------
167
    mv_grid_districts : geopandas.geodataframe.GeoDataFrame
168
        MV grid districts including the heat demand
169
    technologies : pandas.DataFrame
170
        List of supply technologies and their parameters
171
    scenario : str
172
        Name of the scenario
173
    max_size_individual_chp : float
174
        Maximum capacity of an individual chp in MW
175
    Returns
176
    -------
177
    mv_grid_districts : geopandas.geodataframe.GeoDataFrame
178
        MV grid district which need additional individual heat supply
179
    technologies : pandas.DataFrame
180
        List of supply technologies and their parameters
181
    append_df : pandas.DataFrame
182
        List of plants per mv grid for the selected technology
183
184
    """
185
    sources = config.datasets()["heat_supply"]["sources"]
186
187
    tech = technologies[technologies.priority == technologies.priority.max()]
188
189
    # Distribute heat pumps linear to remaining demand.
190
    if tech.index == "heat_pump":
191
192
        if distribution_level == "federal_state":
193
            # Select target values per federal state
194
            target = db.select_dataframe(
195
                f"""
196
                    SELECT DISTINCT ON (gen) gen as state, capacity
197
                    FROM {sources['scenario_capacities']['schema']}.
198
                    {sources['scenario_capacities']['table']} a
199
                    JOIN {sources['federal_states']['schema']}.
200
                    {sources['federal_states']['table']} b
201
                    ON a.nuts = b.nuts
202
                    WHERE scenario_name = '{scenario}'
203
                    AND carrier = 'residential_rural_heat_pump'
204
                    """,
205
                index_col="state",
206
            )
207
208
            heat_per_mv["share"] = heat_per_mv.groupby(
209
                "state"
210
            ).remaining_demand.apply(lambda grp: grp / grp.sum())
211
212
            append_df = (
213
                heat_per_mv["share"]
214
                .mul(target.capacity[heat_per_mv["state"]].values)
215
                .reset_index()
216
            )
217
        else:
218
            # Select target value for Germany
219
            target = db.select_dataframe(
220
                f"""
221
                    SELECT SUM(capacity) AS capacity
222
                    FROM {sources['scenario_capacities']['schema']}.
223
                    {sources['scenario_capacities']['table']} a
224
                    WHERE scenario_name = '{scenario}'
225
                    AND carrier = 'residential_rural_heat_pump'
226
                    """
227
            )
228
229
            heat_per_mv["share"] = (
230
                heat_per_mv.remaining_demand
231
                / heat_per_mv.remaining_demand.sum()
232
            )
233
234
            append_df = (
235
                heat_per_mv["share"].mul(target.capacity[0]).reset_index()
236
            )
237
238
        append_df.rename(
239
            {"bus_id": "mv_grid_id", "share": "capacity"}, axis=1, inplace=True
240
        )
241
242
    elif tech.index == "gas_boiler":
243
244
        append_df = pd.DataFrame(
245
            data={
246
                "capacity": heat_per_mv.remaining_demand.div(
247
                    tech.estimated_flh.values[0]
248
                ),
249
                "carrier": "residential_rural_gas_boiler",
250
                "mv_grid_id": heat_per_mv.index,
251
                "scenario": scenario,
252
            }
253
        )
254
255
    # Scrutinizer issue: "The variable append_df does not seem to be defined
    # for all execution paths." (it is only set in the heat_pump and
    # gas_boiler branches above)
    if append_df.size > 0:
256
        append_df["carrier"] = tech.index[0]
257
        heat_per_mv.loc[
258
            append_df.mv_grid_id, "remaining_demand"
259
        ] -= append_df.set_index("mv_grid_id").capacity.mul(
260
            tech.estimated_flh.values[0]
261
        )
262
263
    heat_per_mv = heat_per_mv[heat_per_mv.remaining_demand >= 0]
264
265
    technologies = technologies.drop(tech.index)
266
267
    return heat_per_mv, technologies, append_df
268
269
270
def cascade_heat_supply_indiv(scenario, distribution_level, plotting=True):
271
    """Assigns supply strategy for individual heating in four steps.
272
273
    1.) All small scale CHP are connected.
    2.) If the supply cannot meet the heat demand, solar thermal collectors
        are attached. This is not implemented yet, since individual
        solar thermal plants are not considered in the eGon2035 scenario.
    3.) If this is not suitable, the mv grid is also supplied by heat pumps.
    4.) The last option is individual gas boilers.
279
280
    Parameters
281
    ----------
282
    scenario : str
        Name of scenario
    distribution_level : str
        Level at which the heat pump target capacity is distributed; either
        per "federal_state" or for Germany as a whole
    plotting : bool, optional
        Choose if individual heating supply is plotted. The default is True.
286
287
    Returns
288
    -------
289
    resulting_capacities : pandas.DataFrame
290
        List of plants per mv grid
291
292
    """
293
294
    sources = config.datasets()["heat_supply"]["sources"]
295
296
    # Select residential heat demand per mv grid district and federal state
297
    heat_per_mv = db.select_geodataframe(
298
        f"""
299
        SELECT d.bus_id as bus_id, SUM(demand) as demand,
300
        c.vg250_lan as state, d.geom
301
        FROM {sources['heat_demand']['schema']}.
302
        {sources['heat_demand']['table']} a
303
        JOIN {sources['map_zensus_grid']['schema']}.
304
        {sources['map_zensus_grid']['table']} b
305
        ON a.zensus_population_id = b.zensus_population_id
306
        JOIN {sources['map_vg250_grid']['schema']}.
307
        {sources['map_vg250_grid']['table']} c
308
        ON b.bus_id = c.bus_id
309
        JOIN {sources['mv_grids']['schema']}.
310
        {sources['mv_grids']['table']} d
311
        ON d.bus_id = c.bus_id
312
        WHERE scenario = '{scenario}'
313
        AND a.zensus_population_id NOT IN (
314
            SELECT zensus_population_id
315
            FROM {sources['map_dh']['schema']}.{sources['map_dh']['table']}
316
            WHERE scenario = '{scenario}')
317
        GROUP BY d.bus_id, vg250_lan, geom
318
        """,
319
        index_col="bus_id",
320
    )
321
322
    # Store geometry of mv grid
323
    geom_mv = heat_per_mv.geom.centroid.copy()
324
325
    # Initialize DataFrame for results
326
    resulting_capacities = pd.DataFrame(
327
        columns=["mv_grid_id", "carrier", "capacity"]
328
    )
329
330
    # Set technology data according to
331
    # http://www.wbzu.de/seminare/infopool/infopool-bhkw
332
    # TODO: Add gas boilers and solar thermal (eGon100RE)
333
    technologies = pd.DataFrame(
334
        index=["heat_pump", "gas_boiler"],
335
        columns=["estimated_flh", "priority"],
336
        data={"estimated_flh": [4000, 8000], "priority": [2, 1]},
337
    )
338
339
    # In the beginning, the remaining demand equals demand
340
    heat_per_mv["remaining_demand"] = heat_per_mv["demand"]
341
342
    # Connect new technologies, if there is still heat demand left
343
    while (len(technologies) > 0) and (len(heat_per_mv) > 0):
344
        # Attach new supply technology
345
        heat_per_mv, technologies, append_df = cascade_per_technology(
346
            heat_per_mv, technologies, scenario, distribution_level
347
        )
348
        # Collect resulting capacities
349
        resulting_capacities = resulting_capacities.append(
350
            append_df, ignore_index=True
351
        )
352
353
    if plotting:
354
        plot_heat_supply(resulting_capacities)
355
356
    return gpd.GeoDataFrame(
357
        resulting_capacities,
358
        geometry=geom_mv[resulting_capacities.mv_grid_id].values,
359
    )
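# Illustrative call (not executed here): for the eGon2035 scenario the cascade
# could be run e.g. as
#   cascade_heat_supply_indiv(
#       "eGon2035", distribution_level="federal_state", plotting=False
#   )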
360
361
362
# @timeit
363
def get_peta_demand(mvgd):
364
    """only residential"""
365
366
    with db.session_scope() as session:
367
        query = (
368
            session.query(
369
                MapZensusGridDistricts.zensus_population_id,
370
                EgonPetaHeat.demand.label("peta_2035"),
371
            )
372
            .filter(MapZensusGridDistricts.bus_id == mvgd)
373
            .filter(
374
                MapZensusGridDistricts.zensus_population_id
375
                == EgonPetaHeat.zensus_population_id
376
            )
377
            .filter(EgonPetaHeat.scenario == "eGon2035")
378
            .filter(EgonPetaHeat.sector == "residential")
379
        )
380
381
    df_peta_2035 = pd.read_sql(
382
        query.statement, query.session.bind, index_col="zensus_population_id"
383
    )
384
385
    with db.session_scope() as session:
386
        query = (
387
            session.query(
388
                MapZensusGridDistricts.zensus_population_id,
389
                EgonPetaHeat.demand.label("peta_2050"),
390
            )
391
            .filter(MapZensusGridDistricts.bus_id == mvgd)
392
            .filter(
393
                MapZensusGridDistricts.zensus_population_id
394
                == EgonPetaHeat.zensus_population_id
395
            )
396
            .filter(EgonPetaHeat.scenario == "eGon100RE")
397
            .filter(EgonPetaHeat.sector == "residential")
398
        )
399
400
    df_peta_100RE = pd.read_sql(
401
        query.statement, query.session.bind, index_col="zensus_population_id"
402
    )
403
404
    df_peta_demand = pd.concat(
405
        [df_peta_2035, df_peta_100RE], axis=1
406
    ).reset_index()
407
408
    return df_peta_demand
409
410
411
# @timeit
412
def get_profile_ids(mvgd):
413
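    """Return the selected intraday heat demand profile (idp) IDs per building
    in the given MV grid district, together with the number of buildings per
    zensus cell and the day of year each selected profile belongs to.
    """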
    with db.session_scope() as session:
414
        query = (
415
            session.query(
416
                MapZensusGridDistricts.zensus_population_id,
417
                EgonHeatTimeseries.building_id,
418
                EgonHeatTimeseries.selected_idp_profiles,
419
            )
420
            .filter(MapZensusGridDistricts.bus_id == mvgd)
421
            .filter(
422
                MapZensusGridDistricts.zensus_population_id
423
                == EgonHeatTimeseries.zensus_population_id
424
            )
425
        )
426
427
    df_profiles_ids = pd.read_sql(
428
        query.statement, query.session.bind, index_col=None
429
    )
430
    # Add building count per cell
431
    df_profiles_ids = pd.merge(
432
        left=df_profiles_ids,
433
        right=df_profiles_ids.groupby("zensus_population_id")["building_id"]
434
        .count()
435
        .rename("buildings"),
436
        left_on="zensus_population_id",
437
        right_index=True,
438
    )
439
440
    df_profiles_ids = df_profiles_ids.explode("selected_idp_profiles")
441
    df_profiles_ids["day_of_year"] = (
442
        df_profiles_ids.groupby("building_id").cumcount() + 1
443
    )
444
    return df_profiles_ids
445
446
447
# @timeit
448
def get_daily_profiles(profile_ids):
449
    saio.register_schema("demand", db.engine())
450
    from saio.demand import egon_heat_idp_pool
451
452
    with db.session_scope() as session:
453
        query = session.query(egon_heat_idp_pool).filter(
454
            egon_heat_idp_pool.index.in_(profile_ids)
455
        )
456
457
    df_profiles = pd.read_sql(
458
        query.statement, query.session.bind, index_col="index"
459
    )
460
461
    df_profiles = df_profiles.explode("idp")
462
    df_profiles["hour"] = df_profiles.groupby(axis=0, level=0).cumcount() + 1
463
464
    return df_profiles
465
466
467
# @timeit
468
def get_daily_demand_share(mvgd):
469
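    """Return the daily demand share per zensus cell and day of year for the
    given MV grid district, based on each cell's climate zone.
    """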
470
    with db.session_scope() as session:
471
        query = (
472
            session.query(
473
                MapZensusGridDistricts.zensus_population_id,
474
                EgonDailyHeatDemandPerClimateZone.day_of_year,
475
                EgonDailyHeatDemandPerClimateZone.daily_demand_share,
476
            )
477
            .filter(
478
                EgonMapZensusClimateZones.climate_zone
479
                == EgonDailyHeatDemandPerClimateZone.climate_zone
480
            )
481
            .filter(
482
                MapZensusGridDistricts.zensus_population_id
483
                == EgonMapZensusClimateZones.zensus_population_id
484
            )
485
            .filter(MapZensusGridDistricts.bus_id == mvgd)
486
        )
487
488
    df_daily_demand_share = pd.read_sql(
489
        query.statement, query.session.bind, index_col=None
490
    )
491
    return df_daily_demand_share
492
493
494
@timeitlog
495
def calc_residential_heat_profiles_per_mvgd(mvgd):
496
    """
497
    Gets residential heat profiles per building in MV grid for both the eGon2035
    and eGon100RE scenarios.
499
500
    Parameters
501
    ----------
502
    mvgd : int
503
        MV grid ID.
504
505
    Returns
506
    --------
507
    pd.DataFrame
508
        Heat demand profiles of buildings. Columns are:
509
            * zensus_population_id : int
510
                Zensus cell ID building is in.
511
            * building_id : int
512
                ID of building.
513
            * day_of_year : int
514
                Day of the year (1 - 365).
515
            * hour : int
516
                Hour of the day (1 - 24).
517
            * eGon2035 : float
518
                Building's residential heat demand in MW, for specified hour of the
519
                year (specified through columns `day_of_year` and `hour`).
520
            * eGon100RE : float
521
                Building's residential heat demand in MW, for specified hour of the
522
                year (specified through columns `day_of_year` and `hour`).
523
524
    """
525
    df_peta_demand = get_peta_demand(mvgd)
526
527
    if df_peta_demand.empty:
528
        return None
529
530
    df_profiles_ids = get_profile_ids(mvgd)
531
532
    if df_profiles_ids.empty:
533
        return None
534
535
    df_profiles = get_daily_profiles(
536
        df_profiles_ids["selected_idp_profiles"].unique()
537
    )
538
539
    df_daily_demand_share = get_daily_demand_share(mvgd)
540
541
    # Merge profile ids to peta demand by zensus_population_id
542
    df_profile_merge = pd.merge(
543
        left=df_peta_demand, right=df_profiles_ids, on="zensus_population_id"
544
    )
545
546
    # Merge daily demand to daily profile ids by zensus_population_id and day
547
    df_profile_merge = pd.merge(
548
        left=df_profile_merge,
549
        right=df_daily_demand_share,
550
        on=["zensus_population_id", "day_of_year"],
551
    )
552
553
    # Merge daily profiles by profile id
554
    df_profile_merge = pd.merge(
555
        left=df_profile_merge,
556
        right=df_profiles[["idp", "hour"]],
557
        left_on="selected_idp_profiles",
558
        right_index=True,
559
    )
560
561
    # Scale profiles
562
    df_profile_merge["eGon2035"] = (
563
        df_profile_merge["idp"]
564
        .mul(df_profile_merge["daily_demand_share"])
565
        .mul(df_profile_merge["peta_2035"])
566
        .div(df_profile_merge["buildings"])
567
    )
568
569
    df_profile_merge["eGon100RE"] = (
570
        df_profile_merge["idp"]
571
        .mul(df_profile_merge["daily_demand_share"])
572
        .mul(df_profile_merge["peta_2050"])
573
        .div(df_profile_merge["buildings"])
574
    )
575
576
    columns = ["zensus_population_id", "building_id", "day_of_year", "hour",
577
               "eGon2035", "eGon100RE"]
578
579
    return df_profile_merge.loc[:, columns]
580
581
582
# Note: Scrutinizer flags this plotting code as duplicated elsewhere in the
# project.
def plot_heat_supply(resulting_capacities):
583
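    """Plot the installed capacity per MV grid district for the carriers CHP
    and heat_pump and save the figures to the plots/ directory.
    """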
584
    from matplotlib import pyplot as plt
585
586
    mv_grids = db.select_geodataframe(
587
        """
588
        SELECT * FROM grid.egon_mv_grid_district
589
        """,
590
        index_col="bus_id",
591
    )
592
593
    for c in ["CHP", "heat_pump"]:
594
        mv_grids[c] = (
595
            resulting_capacities[resulting_capacities.carrier == c]
596
            .set_index("mv_grid_id")
597
            .capacity
598
        )
599
600
        fig, ax = plt.subplots(1, 1)
601
        mv_grids.boundary.plot(linewidth=0.2, ax=ax, color="black")
602
        mv_grids.plot(
603
            ax=ax,
604
            column=c,
605
            cmap="magma_r",
606
            legend=True,
607
            legend_kwds={
608
                "label": f"Installed {c} in MW",
609
                "orientation": "vertical",
610
            },
611
        )
612
        plt.savefig(f"plots/individual_heat_supply_{c}.png", dpi=300)
613
614
615
@timeit
616
def get_zensus_cells_with_decentral_heat_demand_in_mv_grid(
617
    scenario, mv_grid_id):
618
    """
619
    Returns zensus cell IDs with decentral heating systems in given MV grid.
620
621
    As cells with district heating differ between scenarios, this is also
622
    depending on the scenario.
623
624
    Parameters
625
    -----------
626
    scenario : str
627
        Name of scenario. Can be either "eGon2035" or "eGon100RE".
628
    mv_grid_id : int
629
        ID of MV grid.
630
631
    Returns
632
    --------
633
    pd.Index(int)
634
        Zensus cell IDs (as int) of buildings with decentral heating systems in given
635
        MV grid. Type is pandas Index to avoid errors later on when it is
636
        used in a query.
637
638
    """
639
640
    # get zensus cells in grid
641
    zensus_population_ids = db.select_dataframe(
642
        f"""
643
        SELECT zensus_population_id
644
        FROM boundaries.egon_map_zensus_grid_districts
645
        WHERE bus_id = {mv_grid_id}
646
        """,
647
        index_col=None,
648
    ).zensus_population_id.values
649
650
    # convert to pd.Index (otherwise type is np.int64, which will for some
651
    # reason throw an error when used in a query)
652
    zensus_population_ids = pd.Index(zensus_population_ids)
653
654
    # get zensus cells with district heating
655
    from egon.data.datasets.district_heating_areas import (
656
        MapZensusDistrictHeatingAreas,
657
    )
658
659
    with db.session_scope() as session:
660
        query = session.query(
661
            MapZensusDistrictHeatingAreas.zensus_population_id,
662
        ).filter(
663
            MapZensusDistrictHeatingAreas.scenario == scenario,
664
            MapZensusDistrictHeatingAreas.zensus_population_id.in_(
665
                zensus_population_ids
666
            ),
667
        )
668
669
    cells_with_dh = pd.read_sql(
670
        query.statement, query.session.bind, index_col=None
671
    ).zensus_population_id.values
672
673
    # remove zensus cells with district heating
674
    zensus_population_ids = zensus_population_ids.drop(
675
        cells_with_dh, errors="ignore"
676
    )
677
    return zensus_population_ids
678
679
680
@timeit
681
def get_residential_buildings_with_decentral_heat_demand_in_mv_grid(
682
    scenario, mv_grid_id):
683
    """
684
    Returns building IDs of buildings with decentral residential heat demand in
685
    given MV grid.
686
687
    As cells with district heating differ between scenarios, this is also
688
    depending on the scenario.
689
690
    Parameters
691
    -----------
692
    scenario : str
693
        Name of scenario. Can be either "eGon2035" or "eGon100RE".
694
    mv_grid_id : int
695
        ID of MV grid.
696
697
    Returns
698
    --------
699
    pd.Index(int)
700
        Building IDs (as int) of buildings with decentral heating system in given
701
        MV grid. Type is pandas Index to avoid errors later on when it is
702
        used in a query.
703
704
    """
705
    # get zensus cells with decentral heating
706
    zensus_population_ids = get_zensus_cells_with_decentral_heat_demand_in_mv_grid(
707
        scenario, mv_grid_id)
708
709
    # get buildings with decentral heat demand
710
    engine = db.engine()
711
    saio.register_schema("demand", engine)
712
    from saio.demand import egon_heat_timeseries_selected_profiles
713
714
    with db.session_scope() as session:
715
        query = session.query(
716
            egon_heat_timeseries_selected_profiles.building_id,
717
        ).filter(
718
            egon_heat_timeseries_selected_profiles.zensus_population_id.in_(
719
                zensus_population_ids
720
            )
721
        )
722
723
    buildings_with_heat_demand = pd.read_sql(
724
        query.statement, query.session.bind, index_col=None
725
    ).building_id.values
726
727
    return pd.Index(buildings_with_heat_demand)
728
729
730
@timeit
731
def get_cts_buildings_with_decentral_heat_demand_in_mv_grid(
732
    scenario, mv_grid_id):
733
    """
734
    Returns building IDs of buildings with decentral CTS heat demand in
735
    given MV grid.
736
737
    As cells with district heating differ between scenarios, this is also
738
    depending on the scenario.
739
740
    Parameters
741
    -----------
742
    scenario : str
743
        Name of scenario. Can be either "eGon2035" or "eGon100RE".
744
    mv_grid_id : int
745
        ID of MV grid.
746
747
    Returns
748
    --------
749
    pd.Index(int)
750
        Building IDs (as int) of buildings with decentral heating system in given
751
        MV grid. Type is pandas Index to avoid errors later on when it is
752
        used in a query.
753
754
    """
755
756
    # get zensus cells with decentral heating
757
    zensus_population_ids = get_zensus_cells_with_decentral_heat_demand_in_mv_grid(
758
        scenario, mv_grid_id)
759
760
    # get buildings with decentral heat demand
761
    # ToDo @Julian, are these all CTS buildings in the table?
762
    with db.session_scope() as session:
763
        query = session.query(
764
            CtsBuildings.id,
765
        ).filter(
766
            CtsBuildings.zensus_population_id.in_(
767
                zensus_population_ids
768
            )
769
        )
770
771
    buildings_with_heat_demand = pd.read_sql(
772
        query.statement, query.session.bind, index_col=None
773
    ).id.values
774
775
    return pd.Index(buildings_with_heat_demand)
776
777
778
def get_total_heat_pump_capacity_of_mv_grid(scenario, mv_grid_id):
779
    """
780
    Returns total heat pump capacity per grid that was previously defined
781
    (by NEP or pypsa-eur-sec).
782
783
    Parameters
784
    -----------
785
    scenario : str
786
        Name of scenario. Can be either "eGon2035" or "eGon100RE".
787
    mv_grid_id : int
788
        ID of MV grid.
789
790
    Returns
791
    --------
792
    float
793
        Total heat pump capacity in MW in given MV grid.
794
795
    """
796
    from egon.data.datasets.heat_supply import EgonIndividualHeatingSupply
797
798
    with db.session_scope() as session:
799
        query = (
800
            session.query(
801
                EgonIndividualHeatingSupply.mv_grid_id,
802
                EgonIndividualHeatingSupply.capacity,
803
            )
804
            .filter(EgonIndividualHeatingSupply.scenario == scenario)
805
            .filter(EgonIndividualHeatingSupply.carrier == "heat_pump")
806
            .filter(EgonIndividualHeatingSupply.mv_grid_id == mv_grid_id)
807
        )
808
809
    hp_cap_mv_grid = pd.read_sql(
810
        query.statement, query.session.bind, index_col="mv_grid_id"
811
    ).capacity.values[0]
812
813
    return hp_cap_mv_grid
814
815
816
def determine_minimum_hp_capacity_per_building(
817
    peak_heat_demand, flexibility_factor=24 / 18, cop=1.7
818
):
819
    """
820
    Determines minimum required heat pump capacity.
821
822
    Parameters
823
    ----------
824
    peak_heat_demand : pd.Series
825
        Series with peak heat demand per building in MW. Index contains the
826
        building ID.
827
    flexibility_factor : float
        Factor to overdimension the heat pump to allow for some flexible
        dispatch in times of high heat demand. Per default, a factor of 24/18
        is used, to take into account that the heat pump only needs to run
        18 out of 24 hours to cover the peak day's heat demand.
    cop : float
        Coefficient of performance assumed for sizing. Per default, 1.7 is used.
831
832
    Returns
833
    -------
834
    pd.Series
835
        Pandas series with minimum required heat pump capacity per building in
836
        MW.
837
838
    """
839
    return peak_heat_demand * flexibility_factor / cop
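    # Worked example (illustrative): a building with a peak heat demand of
    # 0.01 MW gets a minimum HP capacity of 0.01 * (24 / 18) / 1.7 ≈ 0.0078 MW.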
840
841
842
def determine_buildings_with_hp_in_mv_grid(
843
    hp_cap_mv_grid, min_hp_cap_per_building
844
):
845
    """
846
    Distributes given total heat pump capacity to buildings based on their peak
847
    heat demand.
848
849
    Parameters
850
    -----------
851
    hp_cap_mv_grid : float
852
        Total heat pump capacity in MW in given MV grid.
853
    min_hp_cap_per_building : pd.Series
854
        Pandas series with minimum required heat pump capacity per building
855
         in MW.
856
857
    Returns
858
    -------
859
    pd.Index(int)
860
        Building IDs (as int) of buildings to get heat demand time series for.
861
862
    """
863
    building_ids = min_hp_cap_per_building.index
864
865
    # get buildings with PV to give them a higher priority when selecting
866
    # buildings a heat pump will be allocated to
867
    engine = db.engine()
868
    saio.register_schema("supply", engine)
869
    # TODO Adhoc Pv rooftop fix
870
    # from saio.supply import egon_power_plants_pv_roof_building
871
    #
872
    # with db.session_scope() as session:
873
    #     query = session.query(
874
    #         egon_power_plants_pv_roof_building.building_id
875
    #     ).filter(
876
    #         egon_power_plants_pv_roof_building.building_id.in_(building_ids)
877
    #     )
878
    #
879
    # buildings_with_pv = pd.read_sql(
880
    #     query.statement, query.session.bind, index_col=None
881
    # ).building_id.values
882
    buildings_with_pv = []
883
    # set different weights for buildings with PV and without PV
884
    weight_with_pv = 1.5
885
    weight_without_pv = 1.0
886
    weights = pd.concat(
887
        [
888
            pd.DataFrame(
889
                {"weight": weight_without_pv},
890
                index=building_ids.drop(buildings_with_pv, errors="ignore"),
891
            ),
892
            pd.DataFrame({"weight": weight_with_pv}, index=buildings_with_pv),
893
        ]
894
    )
895
    # normalise weights (probability needs to add up to 1)
896
    weights.weight = weights.weight / weights.weight.sum()
897
898
    # get random order at which buildings are chosen
899
    np.random.seed(db.credentials()["--random-seed"])
900
    buildings_with_hp_order = np.random.choice(
901
        weights.index,
902
        size=len(weights),
903
        replace=False,
904
        p=weights.weight.values,
905
    )
906
907
    # select buildings until HP capacity in MV grid is reached (some rest
908
    # capacity will remain)
909
    hp_cumsum = min_hp_cap_per_building.loc[buildings_with_hp_order].cumsum()
910
    buildings_with_hp = hp_cumsum[hp_cumsum <= hp_cap_mv_grid].index
911
912
    # randomly assign heat pumps to further buildings until the minimum HP
    # capacity of every remaining building exceeds the remaining HP capacity
914
    remaining_hp_cap = (
915
        hp_cap_mv_grid - min_hp_cap_per_building.loc[buildings_with_hp].sum())
916
    min_cap_buildings_wo_hp = min_hp_cap_per_building.loc[
917
        building_ids.drop(buildings_with_hp)]
918
    possible_buildings = min_cap_buildings_wo_hp[
919
        min_cap_buildings_wo_hp <= remaining_hp_cap].index
920
    while len(possible_buildings) > 0:
921
        random.seed(db.credentials()["--random-seed"])
922
        new_hp_building = random.choice(possible_buildings)
923
        # add new building to building with HP
924
        buildings_with_hp = buildings_with_hp.append(pd.Index([new_hp_building]))
925
        # determine if there are still possible buildings
926
        remaining_hp_cap = (
927
            hp_cap_mv_grid - min_hp_cap_per_building.loc[buildings_with_hp].sum())
928
        min_cap_buildings_wo_hp = min_hp_cap_per_building.loc[
929
            building_ids.drop(buildings_with_hp)]
930
        possible_buildings = min_cap_buildings_wo_hp[
931
            min_cap_buildings_wo_hp <= remaining_hp_cap].index
932
933
    return buildings_with_hp
934
935
936
def desaggregate_hp_capacity(min_hp_cap_per_building, hp_cap_mv_grid):
937
    """
938
    Desaggregates the required total heat pump capacity to buildings.
939
940
    All buildings are previously assigned a minimum required heat pump
941
    capacity. If the total heat pump capacity exceeds this, larger heat pumps
942
    are assigned.
943
944
    Parameters
945
    ------------
946
    min_hp_cap_per_building : pd.Series
947
        Pandas series with minimum required heat pump capacity per building
948
         in MW.
949
    hp_cap_mv_grid : float
950
        Total heat pump capacity in MW in given MV grid.
951
952
    Returns
953
    --------
954
    pd.Series
955
        Pandas series with heat pump capacity per building in MW.
956
957
    """
958
    # distribute remaining capacity to all buildings with HP depending on
959
    # installed HP capacity
960
961
    allocated_cap = min_hp_cap_per_building.sum()
962
    remaining_cap = hp_cap_mv_grid - allocated_cap
963
964
    fac = remaining_cap / allocated_cap
965
    hp_cap_per_building = (
966
        min_hp_cap_per_building * fac + min_hp_cap_per_building
967
    )
968
    return hp_cap_per_building
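    # Worked example (illustrative): if the minimum capacities sum up to 8 MW
    # and the grid's total HP capacity is 10 MW, fac = 0.25 and every building's
    # minimum capacity is scaled by 1.25, so the buildings together reach 10 MW.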
969
970
971
def determine_min_hp_cap_pypsa_eur_sec(peak_heat_demand, building_ids):
972
    """
973
    Determines minimum required HP capacity in MV grid in MW as input for
974
    pypsa-eur-sec.
975
976
    Parameters
977
    ----------
978
    peak_heat_demand : pd.Series
979
        Series with peak heat demand per building in MW. Index contains the
980
        building ID.
981
    building_ids : pd.Index(int)
982
        Building IDs (as int) of buildings with decentral heating system in given
983
        MV grid.
984
985
    Returns
986
    --------
987
    float
988
        Minimum required HP capacity in MV grid in MW.
989
990
    """
991
    if len(building_ids) > 0:
992
        peak_heat_demand = peak_heat_demand.loc[building_ids]
993
        # determine minimum required heat pump capacity per building
994
        min_hp_cap_buildings = determine_minimum_hp_capacity_per_building(
995
            peak_heat_demand
996
        )
997
        return min_hp_cap_buildings.sum()
998
    else:
999
        return 0.0
1000
1001
1002
def determine_hp_cap_buildings_eGon2035(mv_grid_id, peak_heat_demand, building_ids):
1003
    """
1004
    Determines which buildings in the MV grid will have a HP (buildings with PV
1005
    rooftop are more likely to be assigned) in the eGon2035 scenario, as well as
1006
    their respective HP capacity in MW.
1007
1008
    Parameters
1009
    -----------
1010
    mv_grid_id : int
1011
        ID of MV grid.
1012
    peak_heat_demand : pd.Series
1013
        Series with peak heat demand per building in MW. Index contains the
1014
        building ID.
1015
    building_ids : pd.Index(int)
1016
        Building IDs (as int) of buildings with decentral heating system in
1017
        given MV grid.
1018
1019
    """
1020
1021
    if len(building_ids) > 0:
1022
        peak_heat_demand = peak_heat_demand.loc[building_ids]
1023
1024
        # determine minimum required heat pump capacity per building
1025
        min_hp_cap_buildings = determine_minimum_hp_capacity_per_building(
1026
            peak_heat_demand
1027
        )
1028
1029
        # select buildings that will have a heat pump
1030
        hp_cap_grid = get_total_heat_pump_capacity_of_mv_grid(
1031
            "eGon2035", mv_grid_id
1032
        )
1033
        buildings_with_hp = determine_buildings_with_hp_in_mv_grid(
1034
            hp_cap_grid, min_hp_cap_buildings
1035
        )
1036
1037
        # distribute total heat pump capacity to all buildings with HP
1038
        hp_cap_per_building = desaggregate_hp_capacity(
1039
            min_hp_cap_buildings.loc[buildings_with_hp], hp_cap_grid
1040
        )
1041
1042
        return hp_cap_per_building
1043
1044
    else:
1045
        return pd.Series()
1046
1047
1048
def determine_hp_cap_buildings_eGon100RE(mv_grid_id):
1049
    """
1050
    Main function to determine HP capacity per building in eGon100RE scenario.
1051
1052
    In the eGon100RE scenario, all buildings without district heating get a heat pump.
1053
1054
    """
1055
1056
    # get buildings with decentral heating systems (residential and CTS)
    # NOTE: the original code called the undefined helper
    # get_buildings_with_decentral_heat_demand_in_mv_grid (flagged by
    # Scrutinizer); the two existing lookups are combined here instead,
    # mirroring the eGon2035 path.
    building_ids = (
        get_residential_buildings_with_decentral_heat_demand_in_mv_grid(
            "eGon100RE", mv_grid_id
        )
        .append(
            get_cts_buildings_with_decentral_heat_demand_in_mv_grid(
                "eGon100RE", mv_grid_id
            )
        )
        .unique()
    )
1060
1061
    # TODO get peak demand from db
1062
    # Scrutinizer issue: "The variable get_peak_demand_per_building does not
    # seem to be defined." - this helper is still missing (see TODO above);
    # it is expected to read the buildings' peak heat demand from the database.
    peak_heat_demand = get_peak_demand_per_building(
        "eGon100RE", building_ids
    )
1065
1066
    # determine minimum required heat pump capacity per building
1067
    min_hp_cap_buildings = determine_minimum_hp_capacity_per_building(
1068
        peak_heat_demand, flexibility_factor=24 / 18, cop=1.7
1069
    )
1070
1071
    # distribute total heat pump capacity to all buildings with HP
1072
    hp_cap_grid = get_total_heat_pump_capacity_of_mv_grid(
1073
        "eGon100RE", mv_grid_id
1074
    )
1075
    hp_cap_per_building = desaggregate_hp_capacity(
1076
        min_hp_cap_buildings, hp_cap_grid
1077
    )
1078
1079
    # ToDo Julian Write desaggregated HP capacity to table (same as for 2035 scenario)
1080
1081
1082
@timeitlog
1083
def determine_hp_capacity_eGon2035_pypsa_eur_sec(n, max_n=5):
1084
    """
1085
    Main function to determine HP capacity per building in eGon2035 scenario and
1086
    minimum required HP capacity in MV for pypsa-eur-sec.
1087
    Further, creates heat demand time series for all buildings with heat pumps
    (in the eGon2035 and eGon100RE scenarios) in the MV grid, as well as for all
    buildings with gas boilers (only in the eGon2035 scenario), used in eTraGo.
1090
1091
    Parameters
1092
    -----------
1093
    n : int
1094
        Number between [1;max_n].
1095
    max_n : int
1096
        Maximum number of bulks (MV grid sets run in parallel).
1097
1098
    """
1099
1100
    # ========== Register np datatypes with SQLA ==========
1101
    register_adapter(np.float64, adapt_numpy_float64)
1102
    register_adapter(np.int64, adapt_numpy_int64)
1103
    # =====================================================
1104
1105
    log_to_file(determine_hp_capacity_eGon2035_pypsa_eur_sec.__qualname__ + f"_{n}")
1106
    if n == 0:
1107
        raise KeyError("n >= 1")
1108
1109
    with db.session_scope() as session:
1110
        query = (
1111
            session.query(
1112
                MapZensusGridDistricts.bus_id,
1113
            )
1114
            .filter(
1115
                MapZensusGridDistricts.zensus_population_id
1116
                == EgonPetaHeat.zensus_population_id
1117
            )
1118
            .distinct(MapZensusGridDistricts.bus_id)
1119
        )
1120
    mvgd_ids = pd.read_sql(query.statement, query.session.bind, index_col=None)
1121
1122
    mvgd_ids = mvgd_ids.sort_values("bus_id").reset_index(drop=True)
1123
1124
    mvgd_ids = np.array_split(mvgd_ids["bus_id"].values, max_n)
1125
1126
    # TODO mvgd_ids = [small mvgd]
1127
    for mvgd in [1556]: #mvgd_ids[n - 1]:
1128
1129
        logger.trace(f"MVGD={mvgd} | Start")
1130
1131
        # ############### get residential heat demand profiles ###############
1132
        df_heat_ts = calc_residential_heat_profiles_per_mvgd(
1133
            mvgd=mvgd
1134
        )
1135
1136
        # pivot to allow aggregation with CTS profiles
1137
        df_heat_ts_2035 = df_heat_ts.loc[
1138
                          :, ["building_id", "day_of_year", "hour", "eGon2035"]]
1139
        df_heat_ts_2035 = df_heat_ts_2035.pivot(
1140
            index=["day_of_year", "hour"],
1141
            columns="building_id",
1142
            values="eGon2035",
1143
        )
1144
        df_heat_ts_2035 = df_heat_ts_2035.sort_index().reset_index(drop=True)
1145
1146
        df_heat_ts_100RE = df_heat_ts.loc[
1147
                          :, ["building_id", "day_of_year", "hour", "eGon100RE"]]
1148
        df_heat_ts_100RE = df_heat_ts_100RE.pivot(
1149
            index=["day_of_year", "hour"],
1150
            columns="building_id",
1151
            values="eGon100RE",
1152
        )
1153
        df_heat_ts_100RE = df_heat_ts_100RE.sort_index().reset_index(drop=True)
1154
1155
        del df_heat_ts
1156
1157
        # ############### get CTS heat demand profiles ###############
1158
        heat_demand_cts_ts_2035 = calc_cts_building_profiles(
1159
            bus_ids=[mvgd],
1160
            scenario="eGon2035",
1161
            sector="heat",
1162
        )
1163
        heat_demand_cts_ts_100RE = calc_cts_building_profiles(
1164
            bus_ids=[mvgd],
1165
            scenario="eGon100RE",
1166
            sector="heat",
1167
        )
1168
1169
        # ############# aggregate residential and CTS demand profiles #############
1170
        df_heat_ts_2035 = pd.concat(
1171
            [df_heat_ts_2035, heat_demand_cts_ts_2035], axis=1
1172
        )
1173
        df_heat_ts_2035 = df_heat_ts_2035.groupby(axis=1, level=0).sum()
1174
1175
        df_heat_ts_100RE = pd.concat(
1176
            [df_heat_ts_100RE, heat_demand_cts_ts_100RE], axis=1
1177
        )
1178
        df_heat_ts_100RE = df_heat_ts_100RE.groupby(axis=1, level=0).sum()
1179
1180
        del heat_demand_cts_ts_2035, heat_demand_cts_ts_100RE
1181
1182
        # ##################### export peak loads to DB ###################
1183
1184
        df_peak_loads_2035 = df_heat_ts_2035.max()
1185
        df_peak_loads_100RE = df_heat_ts_100RE.max()
1186
1187
        df_peak_loads_db_2035 = df_peak_loads_2035.reset_index().melt(
1188
            id_vars="building_id",
1189
            var_name="scenario",
1190
            value_name="peak_load_in_w",
1191
        )
1192
        df_peak_loads_db_2035["scenario"] = "eGon2035"
1193
        df_peak_loads_db_100RE = df_peak_loads_100RE.reset_index().melt(
1194
            id_vars="building_id",
1195
            var_name="scenario",
1196
            value_name="peak_load_in_w",
1197
        )
1198
        df_peak_loads_db_100RE["scenario"] = "eGon100RE"
1199
        df_peak_loads_db = pd.concat(
1200
            [df_peak_loads_db_2035, df_peak_loads_db_100RE])
1201
1202
        del df_peak_loads_db_2035, df_peak_loads_db_100RE
1203
1204
        df_peak_loads_db["sector"] = "residential+CTS"
1205
        # From MW to W
1206
        df_peak_loads_db["peak_load_in_w"] = df_peak_loads_db["peak_load_in_w"] * 1e6
1207
1208
        logger.trace(f"MVGD={mvgd} | Export to DB")
1209
1210
        # TODO export peak loads all buildings both scenarios to db
1211
        # write_table_to_postgres(
1212
        #     df_peak_loads_db, BuildingHeatPeakLoads, engine=engine
1213
        # )
1214
        # logger.trace(f"MVGD={mvgd} | Done")
1215
1216
        # ######## determine HP capacity for NEP scenario and pypsa-eur-sec ##########
1217
1218
        # get residential buildings with decentral heating systems in both scenarios
1219
        buildings_decentral_heating_2035_res = (
1220
            get_residential_buildings_with_decentral_heat_demand_in_mv_grid(
1221
                "eGon2035", mvgd
1222
            )
1223
        )
1224
        buildings_decentral_heating_100RE_res = (
1225
            get_residential_buildings_with_decentral_heat_demand_in_mv_grid(
1226
                "eGon100RE", mvgd
1227
            )
1228
        )
1229
1230
        # get CTS buildings with decentral heating systems in both scenarios
1231
        buildings_decentral_heating_2035_cts = (
1232
            get_cts_buildings_with_decentral_heat_demand_in_mv_grid(
1233
                "eGon2035", mvgd
1234
            )
1235
        )
1236
        buildings_decentral_heating_100RE_cts = (
1237
            get_cts_buildings_with_decentral_heat_demand_in_mv_grid(
1238
                "eGon100RE", mvgd
1239
            )
1240
        )
1241
1242
        # merge residential and CTS buildings
1243
        buildings_decentral_heating_2035 = (
1244
            buildings_decentral_heating_2035_res.append(
1245
                buildings_decentral_heating_2035_cts
1246
            ).unique()
1247
        )
1248
        buildings_decentral_heating_100RE = (
1249
            buildings_decentral_heating_100RE_res.append(
1250
                buildings_decentral_heating_100RE_cts
1251
            ).unique()
1252
        )
1253
1254
        # determine HP capacity per building for NEP2035 scenario
1255
        hp_cap_per_building_2035 = determine_hp_cap_buildings_eGon2035(
1256
            mvgd, df_peak_loads_2035, buildings_decentral_heating_2035)
1257
        buildings_hp_2035 = hp_cap_per_building_2035.index
1258
        buildings_gas_2035 = pd.Index(buildings_decentral_heating_2035).drop(
1259
            buildings_hp_2035)
1260
1261
        # determine minimum HP capacity per building for pypsa-eur-sec
1262
        hp_min_cap_mv_grid_pypsa_eur_sec = determine_min_hp_cap_pypsa_eur_sec(
1263
            df_peak_loads_100RE, buildings_decentral_heating_100RE)
1264
1265
        # ######################## write HP capacities to DB ######################
1266
1267
        # ToDo Julian Write HP capacity per building in 2035 (hp_cap_per_building_2035) to
        #  db table - new table egon_hp_capacity_buildings
1269
1270
        # ToDo Julian Write minimum required capacity in pypsa-eur-sec
1271
        #  (hp_min_cap_mv_grid_pypsa_eur_sec) to
1272
        #  csv for pypsa-eur-sec input - the working directory contains a
        #  directory input_pypsa_eur_sec - minimum_hp_capacity_mv_grid.csv
1274
1275
        # ################ write aggregated heat profiles to DB ###################
1276
1277
        # heat demand time series for buildings with heat pumps
1278
1279
        # ToDo Julian Write aggregated heat demand time series of buildings with HP to
1280
        #  table to be used in eTraGo - egon_etrago_timeseries_individual_heating
1281
        # TODO Clara uses this table already
1282
        #     but will not need it anymore for eTraGo
1283
        # EgonEtragoTimeseriesIndividualHeating
1284
        df_heat_ts_2035.loc[:, buildings_hp_2035].sum(axis=1) # carrier heat_pump
1285
        df_heat_ts_100RE.loc[:, buildings_decentral_heating_100RE].sum(axis=1) # carrier heat_pump
1286
1287
        # Change format
1288
        # ToDo Julian
1289
        # data = CTS_grid.drop(columns="scenario")
1290
        # df_etrago_cts_heat_profiles = pd.DataFrame(
1291
        #     index=data.index, columns=["scn_name", "p_set"]
1292
        # )
1293
        # df_etrago_cts_heat_profiles.p_set = data.values.tolist()
1294
        # df_etrago_cts_heat_profiles.scn_name = CTS_grid["scenario"]
1295
        # df_etrago_cts_heat_profiles.reset_index(inplace=True)
1296
1297
        # # Drop and recreate Table if exists
1298
        # EgonEtragoTimeseriesIndividualHeating.__table__.drop(bind=db.engine(),
1299
        #                                                      checkfirst=True)
1300
        # EgonEtragoTimeseriesIndividualHeating.__table__.create(bind=db.engine(),
1301
        #                                                        checkfirst=True)
1302
        #
1303
        # # Write heat ts into db
1304
        # with db.session_scope() as session:
1305
        #     session.bulk_insert_mappings(
1306
        #         EgonEtragoTimeseriesIndividualHeating,
1307
        #         df_etrago_cts_heat_profiles.to_dict(orient="records"),
1308
        #     )
1309
1310
        # heat demand time series for buildings with gas boilers (only 2035 scenario)
1311
        df_heat_ts_2035.loc[:, buildings_gas_2035].sum(axis=1) # carrier gas_boilers
1312
        # ToDo Julian Write heat demand time series for buildings with gas boiler to
1313
        #  database - into the same table as the time series for HP buildings,
        #  unless Clara says otherwise; will later be aggregated further per gas
        #  voronoi cell (grid.egon_gas_voronoi with carrier CH4) by Clara or Amélia
1316
1317
1318
def determine_hp_capacity_eGon2035_pypsa_eur_sec_bulk_1():
1319
    determine_hp_capacity_eGon2035_pypsa_eur_sec(1, max_n=5)
1320
1321
1322
def determine_hp_capacity_eGon2035_pypsa_eur_sec_bulk_2():
1323
    determine_hp_capacity_eGon2035_pypsa_eur_sec(2, max_n=5)
1324
1325
1326
def determine_hp_capacity_eGon2035_pypsa_eur_sec_bulk_3():
1327
    determine_hp_capacity_eGon2035_pypsa_eur_sec(3, max_n=5)
1328
1329
1330
def determine_hp_capacity_eGon2035_pypsa_eur_sec_bulk_4():
1331
    determine_hp_capacity_eGon2035_pypsa_eur_sec(4, max_n=5)
1332
1333
1334
def determine_hp_capacity_eGon2035_pypsa_eur_sec_bulk_5():
1335
    determine_hp_capacity_eGon2035_pypsa_eur_sec(5, max_n=5)
1336
1337
1338
def create_peak_load_table():
1339
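    """Create table demand.egon_building_heat_peak_loads if it does not exist."""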
1340
    BuildingHeatPeakLoads.__table__.create(bind=engine, checkfirst=True)
1341
1342
1343
def delete_peak_loads_if_existing():
1344
    """Remove all entries"""
1345
1346
    with db.session_scope() as session:
1347
        # delete residential peak load entries
1348
        session.query(BuildingHeatPeakLoads).filter(
1349
            BuildingHeatPeakLoads.sector == "residential"
1350
        ).delete(synchronize_session=False)
1351
1352
1353
if __name__ == "__main__":
1354
    determine_hp_capacity_eGon2035_pypsa_eur_sec_bulk_1()
1355