Passed
Pull Request — dev (#934)
created by unknown · 01:43

data.datasets.electricity_demand_timeseries.cts_buildings   D

Complexity

Total Complexity 58

Size/Duplication

Total Lines 1512
Duplicated Lines 14.55 %

Importance

Changes 0
Metric Value
wmc 58
eloc 754
dl 220
loc 1512
rs 4.406
c 0
b 0
f 0

17 Functions

Rating   Name   Duplication   Size   Complexity  
A place_buildings_with_amenities() 0 41 3
B create_synthetic_buildings() 0 66 6
A delete_synthetic_cts_buildings() 0 14 2
A remove_double_bus_id() 0 43 3
C calc_cts_building_profiles() 0 119 8
A amenities_without_buildings() 0 41 2
A select_cts_buildings() 0 25 1
B cells_with_cts_demand_only() 0 72 2
A calc_building_demand_profile_share() 0 81 2
C buildings_with_amenities() 0 183 5
A cts_electricity() 37 37 2
B calc_census_cell_share() 0 67 5
B cts_buildings() 0 188 2
B buildings_without_amenities() 0 99 2
A cts_heat() 37 37 2
B get_cts_electricity_peak_load() 72 72 5
B get_cts_heat_peak_load() 74 74 5

1 Method

Rating   Name   Duplication   Size   Complexity  
A CtsDemandBuildings.__init__() 0 9 1

How to fix

Duplicated Code

Duplicate code is one of the most pungent code smells. A rule that is often used is to re-structure code once it is duplicated in three or more places.

Common duplication problems and their corresponding solutions include extracting the repeated logic into a shared, parameterized helper, as sketched below:
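For instance, cts_electricity() and cts_heat() in this module differ essentially only in the sector and the target table. A minimal, illustrative sketch (hypothetical names, not the project's API) of how such near-duplicates could be folded into one parameterized helper:

    import pandas as pd

    def build_profile_share(df_cts_buildings: pd.DataFrame, sector: str) -> pd.DataFrame:
        # Concatenate the per-scenario shares for one sector (illustrative only).
        shares = [
            df_cts_buildings.assign(scenario=scenario, sector=sector)
            for scenario in ("eGon2035", "eGon100RE")
        ]
        return pd.concat(shares, ignore_index=True)

    def cts_electricity(df):
        # Thin wrapper keeps the public task name.
        return build_profile_share(df, sector="electricity")

    def cts_heat(df):
        return build_profile_share(df, sector="heat")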

Complexity

 Tip:   Before tackling complexity, make sure that you eliminate any duplication first. This can often reduce the size of classes significantly.

Complex classes like data.datasets.electricity_demand_timeseries.cts_buildings often do a lot of different things. To break such a class down, we need to identify a cohesive component within that class. A common approach to find such a component is to look for fields/methods that share the same prefixes or suffixes.

Once you have determined the fields that belong together, you can apply the Extract Class refactoring, as sketched below. If the component makes sense as a sub-class, Extract Subclass is also a candidate and is often faster.
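A minimal, hypothetical sketch of Extract Class (the names below are illustrative and not part of this module): methods sharing a common prefix are moved into their own cohesive component, and the original class delegates to it.

    class PeakLoadCalculator:
        # Extracted class holding the cohesive "peak_load_*" responsibilities.
        def __init__(self, profiles):
            self.profiles = profiles

        def peak_load_electricity(self):
            return max(p["electricity"] for p in self.profiles)

        def peak_load_heat(self):
            return max(p["heat"] for p in self.profiles)

    class DemandAggregator:
        # Original, overly large class (abridged): it now delegates the
        # peak-load concerns to the extracted component.
        def __init__(self, profiles):
            self.profiles = profiles
            self.peak_loads = PeakLoadCalculator(profiles)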

1
"""
2
CTS electricity and heat demand time series for scenarios in 2035 and 2050
3
assigned to OSM-buildings.
4
5
Disaggregation of cts heat & electricity demand time series from MV substation
6
to census cells via annual demand and then to OSM buildings via
7
amenity tags, or randomly if not sufficient OSM data is available in the
8
respective census cell. If no OSM buildings or synthetic residential buildings
9
are available, new synthetic 5x5 m buildings are generated.
10
11
The resulting data is stored in separate tables
12
13
* `openstreetmap.osm_buildings_synthetic`:
14
    Lists generated synthetic buildings with id, zensus_population_id and
15
    building type. This table is already created within
16
    :func:`hh_buildings.map_houseprofiles_to_buildings()`
17
* `openstreetmap.egon_cts_buildings`:
18
    Table of all selected cts buildings with id, census cell id, geometry and
19
    amenity count in building. This table is created within
20
    :func:`cts_buildings()`
21
* `demand.egon_cts_electricity_demand_building_share`:
22
    Table including the mv substation electricity profile share of all selected
23
    cts buildings for the scenarios eGon2035 and eGon100RE. This table is created
24
    within :func:`cts_electricity()`
25
* `demand.egon_cts_heat_demand_building_share`:
26
    Table including the mv substation heat profile share of all selected
27
    cts buildings for the scenarios eGon2035 and eGon100RE. This table is created
28
    within :func:`cts_heat()`
29
* `demand.egon_building_peak_loads`:
30
    Mapping of demand time series and buildings including cell_id, building
31
    area and peak load. This table is already created within
32
    :func:`hh_buildings.get_building_peak_loads()`
33
34
**The following datasets from the database are mainly used for creation:**
35
36
* `openstreetmap.osm_buildings_filtered`:
37
    Table of OSM-buildings filtered by tags to select residential and cts
38
    buildings only.
39
* `openstreetmap.osm_amenities_shops_filtered`:
40
    Table of OSM-amenities filtered by tags to select cts only.
41
* `openstreetmap.osm_amenities_not_in_buildings_filtered`:
42
    Table of amenities which do not intersect with any building from
43
    `openstreetmap.osm_buildings_filtered`
44
* `openstreetmap.osm_buildings_synthetic`:
45
    Table of synthetic residential buildings
46
* `boundaries.egon_map_zensus_buildings_filtered_all`:
47
    Mapping table of census cells and buildings filtered even if population
48
    in census cell = 0.
49
* `demand.egon_demandregio_zensus_electricity`:
50
    Table of annual electricity load demand for residential and cts at census
51
    cell level. Residential load demand is derived from aggregated residential
52
    building profiles. DemandRegio CTS load demand at NUTS3 level is distributed
53
    to census cells in proportion to the heat demand from Peta5.
54
* `demand.egon_peta_heat`:
55
    Table of annual heat load demand for residential and cts at census cell
56
    level from peta5.
57
* `demand.egon_etrago_electricity_cts`:
58
    Scaled cts electricity time series for every MV substation. Derived from
59
    DemandRegio SLP for selected economic sectors at nuts3. Scaled with annual
60
    demand from `demand.egon_demandregio_zensus_electricity`
61
* `demand.egon_etrago_heat_cts`:
62
    Scaled cts heat time series for every MV substation. Derived from
63
    DemandRegio SLP Gas for selected economic sectors at nuts3. Scaled with
64
    annual demand from `demand.egon_peta_heat`.
65
66
**What is the goal?**
67
68
To disaggregate cts heat and electricity time series from MV substation level
69
to geo-referenced buildings. DemandRegio and Peta5 are used to identify census
70
cells with load demand. Openstreetmap data is used and filtered via tags to
71
identify buildings and count amenities within. The number of amenities serves
72
to assign the appropriate load demand share to the building.
73
74
**What is the challenge?**
75
76
The OSM, DemandRegio and Peta5 datasets differ from each other. The OSM dataset
77
is a community-based dataset which is continuously extended and does not claim
78
to be complete. Therefore, not all census cells that have a demand assigned by the
79
DemandRegio or Peta5 methodology also contain buildings with the respective tags;
80
some contain no buildings at all. When merging these datasets, inconsistencies
81
need to be addressed, for example buildings or amenities not yet tagged in OSM.
82
83
**How are these datasets combined?**
84
85
------>>>>>> continue
86
87
Firstly, all cts buildings are selected, i.e. buildings which have cts amenities
88
inside.
89
90
91
**What are central assumptions during the data processing?**
92
93
* Mapping census to OSM data is not trivial. Discrepancies are substituted.
94
* Missing OSM buildings are generated based on the census building count.
95
* If no census building count data is available, the number of buildings is
96
derived by an average rate of households/buildings applied to the number of
97
households.
98
99
**Drawbacks and limitations of the data**
100
101
* Missing OSM buildings in cells without census building count are derived by
102
an average rate of households/buildings applied to the number of households.
103
As only whole houses can exist, the substitute is ceiled to the next higher
104
integer. Ceiling is applied to avoid rounding down to zero buildings.
105
106
* As this dataset is a cascade after the profile assignment at census cell
107
  level, also check the drawbacks and limitations in hh_profiles.py.
108
109
110
111
Example Query
112
-------------
113
114
115
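A minimal sketch of how the resulting table of cts buildings could be inspected,
using the same query pattern as the functions below (columns as defined in
:class:`CtsBuildings`; assumes the tables above have been created)::

    import pandas as pd

    from egon.data import db

    with db.session_scope() as session:
        query = session.query(CtsBuildings).limit(10)
    df_cts_buildings = pd.read_sql(query.statement, query.session.bind)
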
Notes
116
-----
117
118
This module docstring is rather a dataset documentation. Once a decision
119
is made in ..., the content of this module docstring needs to be moved to the
120
docs attribute of the respective dataset class.
121
"""
122
123
from geoalchemy2 import Geometry
124
from geoalchemy2.shape import to_shape
125
from sqlalchemy import REAL, Column, Integer, String, func
126
from sqlalchemy.ext.declarative import declarative_base
127
import geopandas as gpd
128
import numpy as np
129
import pandas as pd
130
import saio
131
132
from egon.data import db
133
from egon.data import logger as log
134
from egon.data.datasets import Dataset
135
from egon.data.datasets.electricity_demand import (
136
    EgonDemandRegioZensusElectricity,
137
)
138
from egon.data.datasets.electricity_demand.temporal import (
139
    EgonEtragoElectricityCts,
140
)
141
from egon.data.datasets.electricity_demand_timeseries.hh_buildings import (
142
    BuildingElectricityPeakLoads,
143
    OsmBuildingsSynthetic,
144
)
145
from egon.data.datasets.electricity_demand_timeseries.tools import (
146
    random_ints_until_sum,
147
    random_point_in_square,
148
    specific_int_until_sum,
149
    write_table_to_postgis,
150
    write_table_to_postgres,
151
)
152
from egon.data.datasets.heat_demand import EgonPetaHeat
153
from egon.data.datasets.heat_demand_timeseries import EgonEtragoHeatCts
154
from egon.data.datasets.zensus_mv_grid_districts import MapZensusGridDistricts
155
from egon.data.datasets.zensus_vg250 import DestatisZensusPopulationPerHa
156
157
engine = db.engine()
158
Base = declarative_base()
159
160
# import db tables
161
saio.register_schema("openstreetmap", engine=engine)
162
saio.register_schema("boundaries", engine=engine)
163
164
165
class EgonCtsElectricityDemandBuildingShare(Base):
166
    __tablename__ = "egon_cts_electricity_demand_building_share"
167
    __table_args__ = {"schema": "demand"}
168
169
    building_id = Column(Integer, primary_key=True)
170
    scenario = Column(String, primary_key=True)
171
    bus_id = Column(Integer, index=True)
172
    profile_share = Column(REAL)
173
174
175
class EgonCtsHeatDemandBuildingShare(Base):
176
    __tablename__ = "egon_cts_heat_demand_building_share"
177
    __table_args__ = {"schema": "demand"}
178
179
    building_id = Column(Integer, primary_key=True)
180
    scenario = Column(String, primary_key=True)
181
    bus_id = Column(Integer, index=True)
182
    profile_share = Column(REAL)
183
184
185
class CtsBuildings(Base):
186
    __tablename__ = "egon_cts_buildings"
187
    __table_args__ = {"schema": "openstreetmap"}
188
189
    serial = Column(Integer, primary_key=True)
190
    id = Column(Integer, index=True)
191
    zensus_population_id = Column(Integer, index=True)
192
    geom_building = Column(Geometry("Polygon", 3035))
193
    n_amenities_inside = Column(Integer)
194
    source = Column(String)
195
196
197
class BuildingHeatPeakLoads(Base):
198
    __tablename__ = "egon_building_heat_peak_loads"
199
    __table_args__ = {"schema": "demand"}
200
201
    building_id = Column(Integer, primary_key=True)
202
    scenario = Column(String, primary_key=True)
203
    sector = Column(String, primary_key=True)
204
    peak_load_in_w = Column(REAL)
205
206
207
class CtsDemandBuildings(Dataset):
208
    def __init__(self, dependencies):
209
        super().__init__(
210
            name="CtsDemandBuildings",
211
            version="0.0.0",
212
            dependencies=dependencies,
213
            tasks=(
214
                cts_buildings,
215
                {cts_electricity, cts_heat},
216
                {get_cts_electricity_peak_load, get_cts_heat_peak_load},
217
            ),
218
        )
219
220
221
def amenities_without_buildings():
222
    """
223
    Amenities which have no buildings assigned and are in
224
    a cell with cts demand are determined.
225
226
    Returns
227
    -------
228
    pd.DataFrame
229
        Table of amenities without buildings
230
    """
231
    from saio.openstreetmap import osm_amenities_not_in_buildings_filtered
232
233
    with db.session_scope() as session:
234
        cells_query = (
235
            session.query(
236
                DestatisZensusPopulationPerHa.id.label("zensus_population_id"),
237
                osm_amenities_not_in_buildings_filtered.geom_amenity,
238
                osm_amenities_not_in_buildings_filtered.egon_amenity_id,
239
            )
240
            .filter(
241
                func.st_within(
242
                    osm_amenities_not_in_buildings_filtered.geom_amenity,
243
                    DestatisZensusPopulationPerHa.geom,
244
                )
245
            )
246
            .filter(
247
                DestatisZensusPopulationPerHa.id
248
                == EgonDemandRegioZensusElectricity.zensus_population_id
249
            )
250
            .filter(
251
                EgonDemandRegioZensusElectricity.sector == "service",
252
                EgonDemandRegioZensusElectricity.scenario == "eGon2035",
253
            )
254
        )
255
256
    df_amenities_without_buildings = gpd.read_postgis(
257
        cells_query.statement,
258
        cells_query.session.bind,
259
        geom_col="geom_amenity",
260
    )
261
    return df_amenities_without_buildings
262
263
264
def place_buildings_with_amenities(df, amenities=None, max_amenities=None):
265
    """
266
    Building centroids are placed randomly within census cells.
267
    The number of buildings is derived from n_amenities_inside, the selected
268
    method and the number of amenities per building.
269
270
    Returns
271
    -------
272
    df: gpd.GeoDataFrame
273
        Table of building centroids
274
    """
275
    if isinstance(max_amenities, int):
276
        # amount of amenities is randomly generated within bounds
277
        # (max_amenities, amenities per cell)
278
        df["n_amenities_inside"] = df["n_amenities_inside"].apply(
279
            random_ints_until_sum, args=[max_amenities]
280
        )
281
    if isinstance(amenities, int):
282
        # Specific amount of amenities per building
283
        df["n_amenities_inside"] = df["n_amenities_inside"].apply(
284
            specific_int_until_sum, args=[amenities]
285
        )
286
287
    # Unnest each building
288
    df = df.explode(column="n_amenities_inside")
289
290
    # building count per cell
291
    df["building_count"] = df.groupby(["zensus_population_id"]).cumcount() + 1
292
293
    # generate random synthetic buildings
294
    edge_length = 5
295
    # create random points within census cells
296
    points = random_point_in_square(geom=df["geom"], tol=edge_length / 2)
297
298
    df.reset_index(drop=True, inplace=True)
299
    # Store random point as the center of the future synthetic building
300
    df["geom_point"] = points
301
    # Drop geometry of census cell
302
    df = df.drop(columns=["geom"])
303
304
    return df
305
306
307
def create_synthetic_buildings(df, points=None, crs="EPSG:3035"):
308
    """
309
    Synthetic buildings are generated around points.
310
311
    Parameters
312
    ----------
313
    df: pd.DataFrame
314
        Table of census cells
315
    points: gpd.GeoSeries or str
316
        List of points to place buildings around or column name of df
317
    crs: str
318
        CRS of result table
319
320
    Returns
321
    -------
322
    df: gpd.GeoDataFrame
323
        Synthetic buildings
324
    """
325
326
    if isinstance(points, str) and points in df.columns:
327
        points = df[points]
328
    elif isinstance(points, gpd.GeoSeries):
329
        pass
330
    else:
331
        raise ValueError("Points are of the wrong type")
332
333
    # Create building using a square around point
334
    edge_length = 5
335
    df["geom_building"] = points.buffer(distance=edge_length / 2, cap_style=3)
336
337
    if "geom_point" not in df.columns:
338
        df["geom_point"] = df["geom_building"].centroid
339
340
    df = gpd.GeoDataFrame(
341
        df,
342
        crs=crs,
343
        geometry="geom_building",
344
    )
345
346
    # TODO remove after #772 implementation of egon_building_id
347
    df.rename(columns={"id": "egon_building_id"}, inplace=True)
348
349
    # get max number of building ids from synthetic residential table
350
    with db.session_scope() as session:
351
        max_synth_residential_id = session.execute(
352
            func.max(OsmBuildingsSynthetic.id)
353
        ).scalar()
354
    # guard against an empty synthetic buildings table
    max_synth_residential_id = int(max_synth_residential_id or 0)
355
356
    # create sequential ids
357
    df["egon_building_id"] = range(
358
        max_synth_residential_id + 1,
359
        max_synth_residential_id + df.shape[0] + 1,
360
    )
361
362
    df["area"] = df["geom_building"].area
363
    # set building type of synthetic building
364
    df["building"] = "cts"
365
    # TODO remove in #772
366
    df = df.rename(
367
        columns={
368
            # "zensus_population_id": "cell_id",
369
            "egon_building_id": "id",
370
        }
371
    )
372
    return df
373
374
375
def buildings_with_amenities():
376
    """
377
    Amenities which are assigned to buildings are determined
378
    and grouped per building and zensus cell. Buildings
379
    covering multiple cells therefore exist multiple times,
380
    but in different zensus cells. This is necessary to cover
381
    all cells with a cts demand. If buildings span multiple
382
    substations, their amenities are summed and kept in
383
    one substation only. If, as a result, a census cell is left uncovered,
384
    a synthetic amenity is placed. The buildings are aggregated
385
    afterwards during the calculation of the profile_share.
386
387
    Returns
388
    -------
389
    df_buildings_with_amenities: gpd.GeoDataFrame
390
        Contains all buildings with amenities per zensus cell.
391
    df_lost_cells: gpd.GeoDataFrame
392
        Contains synthetic amenities in lost cells. Might be empty
393
    """
394
395
    from saio.openstreetmap import osm_amenities_in_buildings_filtered
396
397
    with db.session_scope() as session:
398
        cells_query = (
399
            session.query(
400
                osm_amenities_in_buildings_filtered,
401
                MapZensusGridDistricts.bus_id,
402
            )
403
            .filter(
404
                MapZensusGridDistricts.zensus_population_id
405
                == osm_amenities_in_buildings_filtered.zensus_population_id
406
            )
407
            .filter(
408
                EgonDemandRegioZensusElectricity.zensus_population_id
409
                == osm_amenities_in_buildings_filtered.zensus_population_id
410
            )
411
            .filter(
412
                EgonDemandRegioZensusElectricity.sector == "service",
413
                EgonDemandRegioZensusElectricity.scenario == "eGon2035",
414
            )
415
        )
416
    df_amenities_in_buildings = pd.read_sql(
417
        cells_query.statement, cells_query.session.bind, index_col=None
418
    )
419
420
    df_amenities_in_buildings["geom_building"] = df_amenities_in_buildings[
421
        "geom_building"
422
    ].apply(to_shape)
423
    df_amenities_in_buildings["geom_amenity"] = df_amenities_in_buildings[
424
        "geom_amenity"
425
    ].apply(to_shape)
426
427
    df_amenities_in_buildings["n_amenities_inside"] = 1
428
429
    # add identifier column for buildings in multiple substations
430
    df_amenities_in_buildings[
431
        "duplicate_identifier"
432
    ] = df_amenities_in_buildings.groupby(["id", "bus_id"])[
433
        "n_amenities_inside"
434
    ].transform(
435
        "cumsum"
436
    )
437
    df_amenities_in_buildings = df_amenities_in_buildings.sort_values(
438
        ["id", "duplicate_identifier"]
439
    )
440
    # sum amenities of buildings with multiple substations
441
    df_amenities_in_buildings[
442
        "n_amenities_inside"
443
    ] = df_amenities_in_buildings.groupby(["id", "duplicate_identifier"])[
444
        "n_amenities_inside"
445
    ].transform(
446
        "sum"
447
    )
448
449
    # create column to always go for bus_id with max amenities
450
    df_amenities_in_buildings[
451
        "max_amenities"
452
    ] = df_amenities_in_buildings.groupby(["id", "bus_id"])[
453
        "n_amenities_inside"
454
    ].transform(
455
        "sum"
456
    )
457
    # sort to go for
458
    df_amenities_in_buildings.sort_values(
459
        ["id", "max_amenities"], ascending=False, inplace=True
460
    )
461
462
    # identify lost zensus cells
463
    df_lost_cells = df_amenities_in_buildings.loc[
464
        df_amenities_in_buildings.duplicated(
465
            subset=["id", "duplicate_identifier"], keep="first"
466
        )
467
    ]
468
    df_lost_cells.drop_duplicates(
469
        subset=["zensus_population_id"], inplace=True
470
    )
471
472
    # drop buildings with multiple substation and lower max amenity
473
    df_amenities_in_buildings.drop_duplicates(
474
        subset=["id", "duplicate_identifier"], keep="first", inplace=True
475
    )
476
477
    # check if lost zensus cells are already covered
478
    if not df_lost_cells.empty:
479
        if not (
480
            df_amenities_in_buildings["zensus_population_id"]
481
            .isin(df_lost_cells["zensus_population_id"])
482
            .empty
483
        ):
484
            # query geom data for cell if not
485
            with db.session_scope() as session:
486
                cells_query = session.query(
487
                    DestatisZensusPopulationPerHa.id,
488
                    DestatisZensusPopulationPerHa.geom,
489
                ).filter(
490
                    DestatisZensusPopulationPerHa.id.in_(
491
                        df_lost_cells["zensus_population_id"]
492
                    )
493
                )
494
495
            df_lost_cells = gpd.read_postgis(
496
                cells_query.statement,
497
                cells_query.session.bind,
498
                geom_col="geom",
499
            )
500
            # TODO maybe adapt method
501
            # place random amenity in cell
502
            df_lost_cells["n_amenities_inside"] = 1
503
            df_lost_cells.rename(
504
                columns={
505
                    "id": "zensus_population_id",
506
                },
507
                inplace=True,
508
            )
509
            df_lost_cells = place_buildings_with_amenities(
510
                df_lost_cells, amenities=1
511
            )
512
            df_lost_cells.rename(
513
                columns={
514
                    # "id": "zensus_population_id",
515
                    "geom_point": "geom_amenity",
516
                },
517
                inplace=True,
518
            )
519
            df_lost_cells.drop(
520
                columns=["building_count", "n_amenities_inside"], inplace=True
521
            )
522
        else:
523
            df_lost_cells = None
524
    else:
525
        df_lost_cells = None
526
527
    # drop helper columns
528
    df_amenities_in_buildings.drop(
529
        columns=["duplicate_identifier"], inplace=True
530
    )
531
532
    # sum amenities per building and cell
533
    df_amenities_in_buildings[
534
        "n_amenities_inside"
535
    ] = df_amenities_in_buildings.groupby(["zensus_population_id", "id"])[
536
        "n_amenities_inside"
537
    ].transform(
538
        "sum"
539
    )
540
    # drop duplicated buildings
541
    df_buildings_with_amenities = df_amenities_in_buildings.drop_duplicates(
542
        ["id", "zensus_population_id"]
543
    )
544
    df_buildings_with_amenities.reset_index(inplace=True, drop=True)
545
546
    df_buildings_with_amenities = df_buildings_with_amenities[
547
        ["id", "zensus_population_id", "geom_building", "n_amenities_inside"]
548
    ]
549
    df_buildings_with_amenities.rename(
550
        columns={
551
            # "zensus_population_id": "cell_id",
552
            "egon_building_id": "id"
553
        },
554
        inplace=True,
555
    )
556
557
    return df_buildings_with_amenities, df_lost_cells
558
559
560
def buildings_without_amenities():
561
    """
562
    Buildings (filtered and synthetic) in cells with
563
    cts demand but no amenities are determined.
564
565
    Returns
566
    -------
567
    df_buildings_without_amenities: gpd.GeoDataFrame
568
        Table of buildings without amenities in zensus cells
569
        with cts demand.
570
    """
571
    from saio.boundaries import egon_map_zensus_buildings_filtered_all
572
    from saio.openstreetmap import (
573
        osm_amenities_shops_filtered,
574
        osm_buildings_filtered,
575
        osm_buildings_synthetic,
576
    )
577
578
    # buildings_filtered in cts-demand-cells without amenities
579
    with db.session_scope() as session:
580
581
        # Synthetic Buildings
582
        q_synth_buildings = session.query(
583
            osm_buildings_synthetic.cell_id.cast(Integer).label(
584
                "zensus_population_id"
585
            ),
586
            osm_buildings_synthetic.id.cast(Integer).label("id"),
587
            osm_buildings_synthetic.area.label("area"),
588
            osm_buildings_synthetic.geom_building.label("geom_building"),
589
            osm_buildings_synthetic.geom_point.label("geom_point"),
590
        )
591
592
        # Buildings filtered
593
        q_buildings_filtered = session.query(
594
            egon_map_zensus_buildings_filtered_all.zensus_population_id,
595
            osm_buildings_filtered.id,
596
            osm_buildings_filtered.area,
597
            osm_buildings_filtered.geom_building,
598
            osm_buildings_filtered.geom_point,
599
        ).filter(
600
            osm_buildings_filtered.id
601
            == egon_map_zensus_buildings_filtered_all.id
602
        )
603
604
        # Amenities + zensus_population_id
605
        q_amenities = (
606
            session.query(
607
                DestatisZensusPopulationPerHa.id.label("zensus_population_id"),
608
            )
609
            .filter(
610
                func.st_within(
611
                    osm_amenities_shops_filtered.geom_amenity,
612
                    DestatisZensusPopulationPerHa.geom,
613
                )
614
            )
615
            .distinct(DestatisZensusPopulationPerHa.id)
616
        )
617
618
        # Cells with CTS demand but without amenities
619
        q_cts_without_amenities = (
620
            session.query(
621
                EgonDemandRegioZensusElectricity.zensus_population_id,
622
            )
623
            .filter(
624
                EgonDemandRegioZensusElectricity.sector == "service",
625
                EgonDemandRegioZensusElectricity.scenario == "eGon2035",
626
            )
627
            .filter(
628
                EgonDemandRegioZensusElectricity.zensus_population_id.notin_(
629
                    q_amenities
630
                )
631
            )
632
            .distinct()
633
        )
634
635
        # Buildings filtered + synthetic buildings residential in
636
        # cells with CTS demand but without amenities
637
        cells_query = q_synth_buildings.union(q_buildings_filtered).filter(
638
            egon_map_zensus_buildings_filtered_all.zensus_population_id.in_(
639
                q_cts_without_amenities
640
            )
641
        )
642
643
    # df_buildings_without_amenities = pd.read_sql(
644
    #     cells_query.statement, cells_query.session.bind, index_col=None)
645
    df_buildings_without_amenities = gpd.read_postgis(
646
        cells_query.statement,
647
        cells_query.session.bind,
648
        geom_col="geom_building",
649
    )
650
651
    df_buildings_without_amenities = df_buildings_without_amenities.rename(
652
        columns={
653
            # "zensus_population_id": "cell_id",
654
            "egon_building_id": "id",
655
        }
656
    )
657
658
    return df_buildings_without_amenities
659
660
661
def select_cts_buildings(df_buildings_wo_amenities, max_n):
662
    """
663
    N Buildings (filtered and synthetic) in each cell with
664
    cts demand are selected. Only the first n buildings
665
    are taken for each cell. The buildings are sorted by surface
666
    area.
667
668
    Returns
669
    -------
670
    df_buildings_with_cts_demand: gpd.GeoDataFrame
671
        Table of buildings
672
    """
673
674
    df_buildings_wo_amenities.sort_values(
675
        "area", ascending=False, inplace=True
676
    )
677
    # select first n ids each census cell if available
678
    df_buildings_with_cts_demand = (
679
        df_buildings_wo_amenities.groupby("zensus_population_id")
680
        .nth(list(range(max_n)))
681
        .reset_index()
682
    )
683
    df_buildings_with_cts_demand.reset_index(drop=True, inplace=True)
684
685
    return df_buildings_with_cts_demand
686
687
688
def cells_with_cts_demand_only(df_buildings_without_amenities):
689
    """
690
    Cells with cts demand but no amenities or buildings
691
    are determined.
692
693
    Returns
694
    -------
695
    df_cells_only_cts_demand: gpd.GeoDataFrame
696
        Table of cells with cts demand but no amenities or buildings
697
    """
698
    from saio.openstreetmap import osm_amenities_shops_filtered
699
700
    # cells mit amenities
701
    with db.session_scope() as session:
702
        sub_query = (
703
            session.query(
704
                DestatisZensusPopulationPerHa.id.label("zensus_population_id"),
705
            )
706
            .filter(
707
                func.st_within(
708
                    osm_amenities_shops_filtered.geom_amenity,
709
                    DestatisZensusPopulationPerHa.geom,
710
                )
711
            )
712
            .distinct(DestatisZensusPopulationPerHa.id)
713
        )
714
715
        cells_query = (
716
            session.query(
717
                EgonDemandRegioZensusElectricity.zensus_population_id,
718
                EgonDemandRegioZensusElectricity.scenario,
719
                EgonDemandRegioZensusElectricity.sector,
720
                EgonDemandRegioZensusElectricity.demand,
721
                DestatisZensusPopulationPerHa.geom,
722
            )
723
            .filter(
724
                EgonDemandRegioZensusElectricity.sector == "service",
725
                EgonDemandRegioZensusElectricity.scenario == "eGon2035",
726
            )
727
            .filter(
728
                EgonDemandRegioZensusElectricity.zensus_population_id.notin_(
729
                    sub_query
730
                )
731
            )
732
            .filter(
733
                EgonDemandRegioZensusElectricity.zensus_population_id
734
                == DestatisZensusPopulationPerHa.id
735
            )
736
        )
737
738
    df_cts_cell_without_amenities = gpd.read_postgis(
739
        cells_query.statement,
740
        cells_query.session.bind,
741
        geom_col="geom",
742
        index_col=None,
743
    )
744
745
    # TODO maybe remove
746
    df_buildings_without_amenities = df_buildings_without_amenities.rename(
747
        columns={"cell_id": "zensus_population_id"}
748
    )
749
750
    # Census cells with only cts demand
751
    df_cells_only_cts_demand = df_cts_cell_without_amenities.loc[
752
        ~df_cts_cell_without_amenities["zensus_population_id"].isin(
753
            df_buildings_without_amenities["zensus_population_id"].unique()
754
        )
755
    ]
756
757
    df_cells_only_cts_demand.reset_index(drop=True, inplace=True)
758
759
    return df_cells_only_cts_demand
760
761
762
def calc_census_cell_share(scenario, sector):
763
    """
764
    The profile share for each census cell is calculated by its
765
    share of annual demand per substation bus. The annual demand
766
    per cell is defined by DemandRegio/Peta5. The share is identical for
767
    both scenarios as the annual demand is linearly scaled.
768
769
    Parameters
770
    ----------
771
    scenario: str
772
        Scenario for which the share is calculated: "eGon2035" or "eGon100RE"
773
    sector: str
774
        Sector for which the share is calculated: "electricity" or "heat"
775
776
    Returns
777
    -------
778
    df_census_share: pd.DataFrame
779
    """
780
    if sector == "electricity":
781
        with db.session_scope() as session:
782
            cells_query = (
783
                session.query(
784
                    EgonDemandRegioZensusElectricity,
785
                    MapZensusGridDistricts.bus_id,
786
                )
787
                .filter(EgonDemandRegioZensusElectricity.sector == "service")
788
                .filter(EgonDemandRegioZensusElectricity.scenario == scenario)
789
                .filter(
790
                    EgonDemandRegioZensusElectricity.zensus_population_id
791
                    == MapZensusGridDistricts.zensus_population_id
792
                )
793
            )
794
795
    elif sector == "heat":
796
        with db.session_scope() as session:
797
            cells_query = (
798
                session.query(EgonPetaHeat, MapZensusGridDistricts.bus_id)
799
                .filter(EgonPetaHeat.sector == "service")
800
                .filter(EgonPetaHeat.scenario == scenario)
801
                .filter(
802
                    EgonPetaHeat.zensus_population_id
803
                    == MapZensusGridDistricts.zensus_population_id
804
                )
805
            )
806
807
    df_demand = pd.read_sql(
808
        cells_query.statement,
Issue: The variable cells_query does not seem to be defined for all execution paths.
809
        cells_query.session.bind,
810
        index_col="zensus_population_id",
811
    )
812
813
    # get demand share of cell per bus
814
    df_census_share = df_demand["demand"] / df_demand.groupby("bus_id")[
815
        "demand"
816
    ].transform("sum")
817
    df_census_share = df_census_share.rename("cell_share")
818
819
    df_census_share = pd.concat(
820
        [
821
            df_census_share,
822
            df_demand[["bus_id", "scenario"]],
823
        ],
824
        axis=1,
825
    )
826
827
    df_census_share.reset_index(inplace=True)
828
    return df_census_share
829
830
831
def calc_building_demand_profile_share(
832
    df_cts_buildings, scenario="eGon2035", sector="electricity"
833
):
834
    """
835
    Share of the cts demand profile per bus for every selected building
836
    is calculated. Building-amenity share is multiplied with census cell share
837
    to get the substation bus profile share for each building. The share is
838
    grouped and aggregated per building as some buildings exceed the shape of
839
    census cells and have amenities assigned from multiple cells. Buildings
840
    therefore get the amenity share of all census cells.
841
842
    Parameters
843
    ----------
844
    df_cts_buildings: gpd.GeoDataFrame
845
        Table of all buildings with cts demand assigned
846
    scenario: str
847
        Scenario for which the share is calculated.
848
    sector: str
849
        Sector for which the share is calculated.
850
851
    Returns
852
    -------
853
    df_building_share: pd.DataFrame
854
        Table of bus profile share per building
855
856
    """
857
858
    def calc_building_amenity_share(df_cts_buildings):
859
        """
860
        Calculate the building share by the number of amenities per building
861
        within a census cell. Building ids can exist multiple times but with
862
        different zensus_population_ids.
863
        """
864
        df_building_amenity_share = df_cts_buildings[
865
            "n_amenities_inside"
866
        ] / df_cts_buildings.groupby("zensus_population_id")[
867
            "n_amenities_inside"
868
        ].transform(
869
            "sum"
870
        )
871
        df_building_amenity_share = pd.concat(
872
            [
873
                df_building_amenity_share.rename("building_amenity_share"),
874
                df_cts_buildings[["zensus_population_id", "id"]],
875
            ],
876
            axis=1,
877
        )
878
        return df_building_amenity_share
879
880
    df_building_amenity_share = calc_building_amenity_share(df_cts_buildings)
881
882
    df_census_cell_share = calc_census_cell_share(
883
        scenario=scenario, sector=sector
884
    )
885
886
    df_demand_share = pd.merge(
887
        left=df_building_amenity_share,
888
        right=df_census_cell_share,
889
        left_on="zensus_population_id",
890
        right_on="zensus_population_id",
891
    )
892
    df_demand_share["profile_share"] = df_demand_share[
893
        "building_amenity_share"
894
    ].multiply(df_demand_share["cell_share"])
895
896
    # TODO bus_id fix
897
    df_demand_share = df_demand_share[
898
        ["id", "bus_id", "scenario", "profile_share"]
899
    ]
900
901
    # Group and aggregate per building for multi cell buildings
902
    df_demand_share = (
903
        df_demand_share.groupby(["scenario", "id", "bus_id"])
904
        .sum()
905
        .reset_index()
906
    )
907
    if df_demand_share.duplicated("id", keep=False).any():
908
        print(
909
            df_demand_share.loc[df_demand_share.duplicated("id", keep=False)]
910
        )
911
    return df_demand_share
912
913
914
def calc_cts_building_profiles(
915
    egon_building_ids,
916
    bus_ids,
917
    scenario,
918
    sector,
919
):
920
    """
921
    Calculate the cts demand profile for each building. The profile is
922
    calculated by the demand share of the building per substation bus.
923
924
    Parameters
925
    ----------
926
    egon_building_ids: list of int
927
        Ids of the buildings for which the profiles are calculated.
928
    bus_ids: list of int
929
        Ids of the substations for which selected building profiles are
930
        calculated.
931
    scenario: str
932
        Scenario for which the share is calculated: "eGon2035" or "eGon100RE"
933
    sector: str
934
        Sector for which the share is calculated: "electricity" or "heat"
935
936
    Returns
937
    -------
938
    df_building_profiles: pd.DataFrame
939
        Table of demand profile per building
940
    """
941
    if sector == "electricity":
942
        # Get cts building electricity demand share of selected buildings
943
        with db.session_scope() as session:
944
            cells_query = (
945
                session.query(
946
                    EgonCtsElectricityDemandBuildingShare,
947
                )
948
                .filter(
949
                    EgonCtsElectricityDemandBuildingShare.scenario == scenario
950
                )
951
                .filter(
952
                    EgonCtsElectricityDemandBuildingShare.building_id.in_(
953
                        egon_building_ids
954
                    )
955
                )
956
            )
957
958
        df_demand_share = pd.read_sql(
959
            cells_query.statement, cells_query.session.bind, index_col=None
960
        )
961
962
        # Get substation cts electricity load profiles of selected bus_ids
963
        with db.session_scope() as session:
964
            cells_query = (
965
                session.query(EgonEtragoElectricityCts).filter(
966
                    EgonEtragoElectricityCts.scn_name == scenario
967
                )
968
            ).filter(EgonEtragoElectricityCts.bus_id.in_(bus_ids))
969
970
        df_cts_profiles = pd.read_sql(
971
            cells_query.statement,
972
            cells_query.session.bind,
973
        )
974
        df_cts_profiles = pd.DataFrame.from_dict(
975
            df_cts_profiles.set_index("bus_id")["p_set"].to_dict(),
976
            orient="index",
977
        )
978
        # df_cts_profiles = calc_load_curves_cts(scenario)
979
980
    elif sector == "heat":
981
        # Get cts building heat demand share of selected buildings
982
        with db.session_scope() as session:
983
            cells_query = (
984
                session.query(
985
                    EgonCtsHeatDemandBuildingShare,
986
                )
987
                .filter(EgonCtsHeatDemandBuildingShare.scenario == scenario)
988
                .filter(
989
                    EgonCtsHeatDemandBuildingShare.building_id.in_(
990
                        egon_building_ids
991
                    )
992
                )
993
            )
994
995
        df_demand_share = pd.read_sql(
996
            cells_query.statement, cells_query.session.bind, index_col=None
997
        )
998
999
        # Get substation cts heat load profiles of selected bus_ids
1000
        with db.session_scope() as session:
1001
            cells_query = (
1002
                session.query(EgonEtragoHeatCts).filter(
1003
                    EgonEtragoHeatCts.scn_name == scenario
1004
                )
1005
            ).filter(EgonEtragoHeatCts.bus_id.in_(bus_ids))
1006
1007
        df_cts_profiles = pd.read_sql(
1008
            cells_query.statement,
1009
            cells_query.session.bind,
1010
        )
1011
        df_cts_profiles = pd.DataFrame.from_dict(
1012
            df_cts_profiles.set_index("bus_id")["p_set"].to_dict(),
1013
            orient="index",
1014
        )
1015
1016
    # TODO remove later
1017
    df_demand_share.rename(columns={"id": "building_id"}, inplace=True)
Issue: The variable df_demand_share does not seem to be defined for all execution paths.
1018
1019
    # get demand profile for all buildings for selected demand share
1020
    df_building_profiles = pd.DataFrame()
1021
    for bus_id, df in df_demand_share.groupby("bus_id"):
1022
        shares = df.set_index("building_id", drop=True)["profile_share"]
1023
        profile_ts = df_cts_profiles.loc[bus_id]
Issue: The variable df_cts_profiles does not seem to be defined for all execution paths.
1024
        building_profiles = np.outer(profile_ts, shares)
1025
        building_profiles = pd.DataFrame(
1026
            building_profiles, index=profile_ts.index, columns=shares.index
1027
        )
1028
        df_building_profiles = pd.concat(
1029
            [df_building_profiles, building_profiles], axis=1
1030
        )
1031
1032
    return df_building_profiles
1033
1034
1035
def delete_synthetic_cts_buildings():
1036
    """
1037
    All synthetic cts buildings are deleted from the DB. This is necessary if
1038
    the task is run multiple times as the existing synthetic buildings
1039
    influence the results.
1040
    """
1041
    # import db tables
1042
    from saio.openstreetmap import osm_buildings_synthetic
1043
1044
    # delete all synthetic cts buildings
1045
    with db.session_scope() as session:
1046
        session.query(osm_buildings_synthetic).filter(
1047
            osm_buildings_synthetic.building == "cts"
1048
        ).delete()
1049
1050
1051
def remove_double_bus_id(df_cts_buildings):
1052
    """This is an backup adhoc fix if there should still be a building which
1053
    is assigned to 2 substations. In this case one of the buildings is just
1054
    dropped. As this currently accounts for only one building with one amenity
1055
    the deviation is neglectable."""
1056
    # assign bus_id via census cell of amenity
1057
    with db.session_scope() as session:
1058
        cells_query = session.query(
1059
            MapZensusGridDistricts.zensus_population_id,
1060
            MapZensusGridDistricts.bus_id,
1061
        )
1062
1063
    df_egon_map_zensus_buildings_buses = pd.read_sql(
1064
        cells_query.statement,
1065
        cells_query.session.bind,
1066
        index_col=None,
1067
    )
1068
    df_cts_buildings = pd.merge(
1069
        left=df_cts_buildings,
1070
        right=df_egon_map_zensus_buildings_buses,
1071
        on="zensus_population_id",
1072
    )
1073
1074
    substation_per_building = df_cts_buildings.groupby("id")[
1075
        "bus_id"
1076
    ].nunique()
1077
    building_id = substation_per_building.loc[
1078
        substation_per_building > 1
1079
    ].index
1080
    df_duplicates = df_cts_buildings.loc[
1081
        df_cts_buildings["id"].isin(building_id)
1082
    ]
1083
    for unique_id in df_duplicates["id"].unique():
1084
        drop_index = df_duplicates[df_duplicates["id"] == unique_id].index[0]
1085
        print(
1086
            f"Buildings {df_cts_buildings.loc[drop_index, 'id']}"
1087
            f" dropped because of double substation"
1088
        )
1089
        df_cts_buildings.drop(index=drop_index, inplace=True)
1090
1091
    df_cts_buildings.drop(columns="bus_id", inplace=True)
1092
1093
    return df_cts_buildings
1094
1095
1096
def cts_buildings():
1097
    """
1098
    Assigns CTS demand to buildings and calculates the respective demand
1099
    profiles. The demand profiles per substation are disaggregated by the
1100
    annual demand share of each census cell and by the number of amenities
1101
    per building within the cell. If no building data is available,
1102
    synthetic buildings are generated around the amenities. If no amenities
1103
    but cts demand is available, buildings are randomly selected. If no
1104
    building nor amenity is available, random synthetic buildings are
1105
    generated. The demand share is stored in the database.
1106
1107
    Notes
1108
    -----
1109
    Cells with CTS demand, amenities and buildings do not change within
1110
    the scenarios, only the demand itself. Therefore scenario eGon2035
1111
    can be used universally to determine the cts buildings but not for
1112
    the demand share.
1113
    """
1114
1115
    log.info("Start logging!")
1116
    # Buildings with amenities
1117
    df_buildings_with_amenities, df_lost_cells = buildings_with_amenities()
1118
    log.info("Buildings with amenities selected!")
1119
1120
    # Median number of amenities per cell
1121
    median_n_amenities = int(
1122
        df_buildings_with_amenities.groupby("zensus_population_id")[
1123
            "n_amenities_inside"
1124
        ]
1125
        .sum()
1126
        .median()
1127
    )
1128
    log.info(f"Median amenity value: {median_n_amenities}")
1129
1130
    # Remove synthetic CTS buildings if existing
1131
    delete_synthetic_cts_buildings()
1132
    log.info("Old synthetic cts buildings deleted!")
1133
1134
    # Amenities not assigned to buildings
1135
    df_amenities_without_buildings = amenities_without_buildings()
1136
    log.info("Amenities without buildlings selected!")
1137
1138
    # Append lost cells due to duplicated ids, to cover all demand cells
1139
    if not df_lost_cells.empty:
1140
1141
        df_lost_cells["amenities"] = median_n_amenities
1142
        # create row for every amenity
1143
        df_lost_cells["amenities"] = (
1144
            df_lost_cells["amenities"].astype(int).apply(range)
1145
        )
1146
        df_lost_cells = df_lost_cells.explode("amenities")
1147
        df_lost_cells.drop(columns="amenities", inplace=True)
1148
        df_amenities_without_buildings = pd.concat(
1149
            [df_amenities_without_buildings, df_lost_cells], ignore_index=True
1150
        )
1151
        log.info("Lost cells due to substation intersection appended!")
1152
1153
    # One building per amenity
1154
    df_amenities_without_buildings["n_amenities_inside"] = 1
1155
    # Create synthetic buildings for amenites without buildings
1156
    df_synthetic_buildings_with_amenities = create_synthetic_buildings(
1157
        df_amenities_without_buildings, points="geom_amenity"
1158
    )
1159
    log.info("Synthetic buildings created!")
1160
1161
    # TODO write to DB and remove renaming
1162
    write_table_to_postgis(
1163
        df_synthetic_buildings_with_amenities.rename(
1164
            columns={
1165
                "zensus_population_id": "cell_id",
1166
                "egon_building_id": "id",
1167
            }
1168
        ),
1169
        OsmBuildingsSynthetic,
1170
        engine=engine,
1171
        drop=False,
1172
    )
1173
    log.info("Synthetic buildings exported to DB!")
1174
1175
    # Cells without amenities but CTS demand and buildings
1176
    df_buildings_without_amenities = buildings_without_amenities()
1177
    log.info("Buildings without amenities in demand cells identified!")
1178
1179
    # Backup bugfix for duplicated buildings which occur in the SQL query
1180
    # drop building ids which have already been used
1181
    mask = df_buildings_without_amenities.loc[
1182
        df_buildings_without_amenities["id"].isin(
1183
            df_buildings_with_amenities["id"]
1184
        )
1185
    ].index
1186
    df_buildings_without_amenities = df_buildings_without_amenities.drop(
1187
        index=mask
1188
    ).reset_index(drop=True)
1189
    log.info(f"{len(mask)} duplicated ids removed!")
1190
1191
    # select median n buildings per cell
1192
    df_buildings_without_amenities = select_cts_buildings(
1193
        df_buildings_without_amenities, max_n=median_n_amenities
1194
    )
1195
    df_buildings_without_amenities["n_amenities_inside"] = 1
1196
    log.info(f"{median_n_amenities} buildings per cell selected!")
1197
1198
    # Create synthetic amenities and buildings in cells with only CTS demand
1199
    df_cells_with_cts_demand_only = cells_with_cts_demand_only(
1200
        df_buildings_without_amenities
1201
    )
1202
    log.info("Cells with only demand identified!")
1203
1204
    # Median n Amenities per cell
1205
    df_cells_with_cts_demand_only["amenities"] = median_n_amenities
1206
    # create row for every amenity
1207
    df_cells_with_cts_demand_only["amenities"] = (
1208
        df_cells_with_cts_demand_only["amenities"].astype(int).apply(range)
1209
    )
1210
    df_cells_with_cts_demand_only = df_cells_with_cts_demand_only.explode(
1211
        "amenities"
1212
    )
1213
    df_cells_with_cts_demand_only.drop(columns="amenities", inplace=True)
1214
1215
    # Only 1 Amenity per Building
1216
    df_cells_with_cts_demand_only["n_amenities_inside"] = 1
1217
    df_cells_with_cts_demand_only = place_buildings_with_amenities(
1218
        df_cells_with_cts_demand_only, amenities=1
1219
    )
1220
    df_synthetic_buildings_without_amenities = create_synthetic_buildings(
1221
        df_cells_with_cts_demand_only, points="geom_point"
1222
    )
1223
    log.info(f"{median_n_amenities} synthetic buildings per cell created")
1224
1225
    # TODO remove (backup) renaming after #871
1226
    write_table_to_postgis(
1227
        df_synthetic_buildings_without_amenities.rename(
1228
            columns={
1229
                "zensus_population_id": "cell_id",
1230
                "egon_building_id": "id",
1231
            }
1232
        ),
1233
        OsmBuildingsSynthetic,
1234
        engine=engine,
1235
        drop=False,
1236
    )
1237
    log.info("Synthetic buildings exported to DB")
1238
1239
    # Concat all buildings
1240
    columns = [
1241
        "zensus_population_id",
1242
        "id",
1243
        "geom_building",
1244
        "n_amenities_inside",
1245
        "source",
1246
    ]
1247
1248
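    # Source labels (abbreviations of the dataframes above): bwa = buildings
    # with amenities, sbwa = synthetic buildings with amenities, bwoa =
    # buildings without amenities, sbwoa = synthetic buildings without amenities.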
    df_buildings_with_amenities["source"] = "bwa"
1249
    df_synthetic_buildings_with_amenities["source"] = "sbwa"
1250
    df_buildings_without_amenities["source"] = "bwoa"
1251
    df_synthetic_buildings_without_amenities["source"] = "sbwoa"
1252
1253
    df_cts_buildings = pd.concat(
1254
        [
1255
            df_buildings_with_amenities[columns],
1256
            df_synthetic_buildings_with_amenities[columns],
1257
            df_buildings_without_amenities[columns],
1258
            df_synthetic_buildings_without_amenities[columns],
1259
        ],
1260
        axis=0,
1261
        ignore_index=True,
1262
    )
1263
    df_cts_buildings = remove_double_bus_id(df_cts_buildings)
1264
    log.info("Double bus_id checked")
1265
1266
    # TODO maybe remove after #772
1267
    df_cts_buildings["id"] = df_cts_buildings["id"].astype(int)
1268
1269
    # Write table to db for debugging
1270
    # TODO remove later? Check if cts buildings are queried in other functions
1271
    df_cts_buildings = gpd.GeoDataFrame(
1272
        df_cts_buildings, geometry="geom_building", crs=3035
1273
    )
1274
    df_cts_buildings = df_cts_buildings.reset_index().rename(
1275
        columns={"index": "serial"}
1276
    )
1277
    write_table_to_postgis(
1278
        df_cts_buildings,
1279
        CtsBuildings,
1280
        engine=engine,
1281
        drop=True,
1282
    )
1283
    log.info("CTS buildings exported to DB!")
1284
1285
1286 View Code Duplication
def cts_electricity():
Duplication: This code seems to be duplicated in your project.
1287
    """
1288
    Calculate cts electricity demand share of hvmv substation profile
1289
     for buildings.
1290
    """
1291
    log.info("Start logging!")
1292
    with db.session_scope() as session:
1293
        cells_query = session.query(CtsBuildings)
1294
1295
    df_cts_buildings = pd.read_sql(
1296
        cells_query.statement, cells_query.session.bind, index_col=None
1297
    )
1298
    log.info("CTS buildings from DB imported!")
1299
    df_demand_share_2035 = calc_building_demand_profile_share(
1300
        df_cts_buildings, scenario="eGon2035", sector="electricity"
1301
    )
1302
    log.info("Profile share for egon2035 calculated!")
1303
1304
    df_demand_share_100RE = calc_building_demand_profile_share(
1305
        df_cts_buildings, scenario="eGon100RE", sector="electricity"
1306
    )
1307
    log.info("Profile share for egon100RE calculated!")
1308
1309
    df_demand_share = pd.concat(
1310
        [df_demand_share_2035, df_demand_share_100RE],
1311
        axis=0,
1312
        ignore_index=True,
1313
    )
1314
    df_demand_share.rename(columns={"id": "building_id"}, inplace=True)
1315
1316
    write_table_to_postgres(
1317
        df_demand_share,
1318
        EgonCtsElectricityDemandBuildingShare,
1319
        engine=engine,
1320
        drop=True,
1321
    )
1322
    log.info("Profile share exported to DB!")
1323
1324
1325 View Code Duplication
def cts_heat():
Duplication: This code seems to be duplicated in your project.
1326
    """
1327
    Calculate cts heat demand share of hvmv substation profile
1328
     for buildings.
1329
    """
1330
    log.info("Start logging!")
1331
    with db.session_scope() as session:
1332
        cells_query = session.query(CtsBuildings)
1333
1334
    df_cts_buildings = pd.read_sql(
1335
        cells_query.statement, cells_query.session.bind, index_col=None
1336
    )
1337
    log.info("CTS buildings from DB imported!")
1338
1339
    df_demand_share_2035 = calc_building_demand_profile_share(
1340
        df_cts_buildings, scenario="eGon2035", sector="heat"
1341
    )
1342
    log.info("Profile share for egon2035 calculated!")
1343
    df_demand_share_100RE = calc_building_demand_profile_share(
1344
        df_cts_buildings, scenario="eGon100RE", sector="heat"
1345
    )
1346
    log.info("Profile share for egon100RE calculated!")
1347
    df_demand_share = pd.concat(
1348
        [df_demand_share_2035, df_demand_share_100RE],
1349
        axis=0,
1350
        ignore_index=True,
1351
    )
1352
1353
    df_demand_share.rename(columns={"id": "building_id"}, inplace=True)
1354
1355
    write_table_to_postgres(
1356
        df_demand_share,
1357
        EgonCtsHeatDemandBuildingShare,
1358
        engine=engine,
1359
        drop=True,
1360
    )
1361
    log.info("Profile share exported to DB!")
1362
1363
1364 View Code Duplication
def get_cts_electricity_peak_load():
Duplication: This code seems to be duplicated in your project.
1365
    """
1366
    Get electricity peak load of all CTS buildings for both scenarios and
1367
    store in DB.
1368
    """
1369
    log.info("Start logging!")
1370
1371
    BuildingElectricityPeakLoads.__table__.create(bind=engine, checkfirst=True)
1372
1373
    # Delete rows with cts demand
1374
    with db.session_scope() as session:
1375
        session.query(BuildingElectricityPeakLoads).filter(
1376
            BuildingElectricityPeakLoads.sector == "cts"
1377
        ).delete()
1378
    log.info("Cts electricity peak load removed from DB!")
1379
1380
    for scenario in ["eGon2035", "eGon100RE"]:
1381
1382
        with db.session_scope() as session:
1383
            cells_query = session.query(
1384
                EgonCtsElectricityDemandBuildingShare
1385
            ).filter(
1386
                EgonCtsElectricityDemandBuildingShare.scenario == scenario
1387
            )
1388
1389
        df_demand_share = pd.read_sql(
1390
            cells_query.statement, cells_query.session.bind, index_col=None
1391
        )
1392
1393
        with db.session_scope() as session:
1394
            cells_query = session.query(EgonEtragoElectricityCts).filter(
1395
                EgonEtragoElectricityCts.scn_name == scenario
1396
            )
1397
1398
        df_cts_profiles = pd.read_sql(
1399
            cells_query.statement,
1400
            cells_query.session.bind,
1401
        )
1402
        df_cts_profiles = pd.DataFrame.from_dict(
1403
            df_cts_profiles.set_index("bus_id")["p_set"].to_dict(),
1404
            orient="columns",
1405
        )
1406
1407
        df_peak_load = pd.merge(
1408
            left=df_cts_profiles.max().astype(float).rename("max"),
1409
            right=df_demand_share,
1410
            left_index=True,
1411
            right_on="bus_id",
1412
        )
1413
1414
        # Convert unit from MWh to W
1415
        df_peak_load["max"] = df_peak_load["max"] * 1e6
1416
        df_peak_load["peak_load_in_w"] = (
1417
            df_peak_load["max"] * df_peak_load["profile_share"]
1418
        )
1419
        log.info(f"Peak load for {scenario} determined!")
1420
1421
        # TODO remove later
1422
        df_peak_load.rename(columns={"id": "building_id"}, inplace=True)
1423
        df_peak_load["sector"] = "cts"
1424
1425
        # # Write peak loads into db
1426
        write_table_to_postgres(
1427
            df_peak_load,
1428
            BuildingElectricityPeakLoads,
1429
            engine=engine,
1430
            drop=False,
1431
            index=False,
1432
            if_exists="append",
1433
        )
1434
1435
        log.info(f"Peak load for {scenario} exported to DB!")
1436
1437
1438 View Code Duplication
def get_cts_heat_peak_load():
Duplication: This code seems to be duplicated in your project.
1439
    """
1440
    Get heat peak load of all CTS buildings for both scenarios and store in DB.
1441
    """
1442
    log.info("Start logging!")
1443
1444
    BuildingHeatPeakLoads.__table__.create(bind=engine, checkfirst=True)
1445
1446
    # Delete rows with cts demand
1447
    with db.session_scope() as session:
1448
        session.query(BuildingHeatPeakLoads).filter(
1449
            BuildingHeatPeakLoads.sector == "cts"
1450
        ).delete()
1451
    log.info("Cts heat peak load removed from DB!")
1452
1453
    for scenario in ["eGon2035", "eGon100RE"]:
1454
1455
        with db.session_scope() as session:
1456
            cells_query = session.query(
1457
                EgonCtsHeatDemandBuildingShare
1458
            ).filter(
1459
                EgonCtsHeatDemandBuildingShare.scenario == scenario
1460
            )
1461
1462
        df_demand_share = pd.read_sql(
1463
            cells_query.statement, cells_query.session.bind, index_col=None
1464
        )
1465
        log.info(f"Retrieved demand share for scenario: {scenario}")
1466
1467
        with db.session_scope() as session:
1468
            cells_query = session.query(EgonEtragoHeatCts).filter(
1469
                EgonEtragoHeatCts.scn_name == scenario
1470
            )
1471
1472
        df_cts_profiles = pd.read_sql(
1473
            cells_query.statement,
1474
            cells_query.session.bind,
1475
        )
1476
        log.info(f"Retrieved substation profiles for scenario: {scenario}")
1477
1478
        df_cts_profiles = pd.DataFrame.from_dict(
1479
            df_cts_profiles.set_index("bus_id")["p_set"].to_dict(),
1480
            orient="columns",
1481
        )
1482
1483
        df_peak_load = pd.merge(
1484
            left=df_cts_profiles.max().astype(float).rename("max"),
1485
            right=df_demand_share,
1486
            left_index=True,
1487
            right_on="bus_id",
1488
        )
1489
1490
        # Convert unit from MWh to W
1491
        df_peak_load["max"] = df_peak_load["max"] * 1e6
1492
        df_peak_load["peak_load_in_w"] = (
1493
            df_peak_load["max"] * df_peak_load["profile_share"]
1494
        )
1495
        log.info(f"Peak load for {scenario} determined!")
1496
1497
        # TODO remove later
1498
        df_peak_load.rename(columns={"id": "building_id"}, inplace=True)
1499
        df_peak_load["sector"] = "cts"
1500
1501
        # # Write peak loads into db
1502
        write_table_to_postgres(
1503
            df_peak_load,
1504
            BuildingHeatPeakLoads,
1505
            engine=engine,
1506
            drop=False,
1507
            index=False,
1508
            if_exists="append",
1509
        )
1510
1511
        log.info(f"Peak load for {scenario} exported to DB!")
1512