Passed: Pull Request — dev (#1255), created by unknown, 02:57

data.datasets.pypsaeur (rating: F)

Complexity

Total Complexity 127

Size/Duplication

Total Lines 2303
Duplicated Lines 2.91 %

Importance

Changes 0
Metric Value
wmc 127
eloc 1418
dl 67
loc 2303
rs 0.8
c 0
b 0
f 0

2 Methods

Rating   Name   Duplication   Size   Complexity  
A PreparePypsaEur.__init__() 0 8 1
A RunPypsaEur.__init__() 0 11 1

27 Functions

Rating   Name   Duplication   Size   Complexity  
A solve_network() 0 38 2
A prepare_network_2() 0 27 2
F download() 0 188 12
A coal_exit_D() 0 13 1
A h2_overground_stores() 0 43 1
B update_electrical_timeseries_germany() 0 89 3
A prepare_network() 0 45 4
A overwrite_H2_pipeline_share() 0 43 1
A drop_fossil_gas() 0 7 1
A drop_biomass() 0 6 2
A prepared_network() 34 34 3
A rual_heat_technologies() 0 16 1
D execute() 0 132 10
A drop_conventional_power_plants() 0 12 1
D combine_decentral_and_rural_heat() 0 74 12
A read_network() 33 33 3
A additional_grid_expansion_2045() 0 5 1
A drop_urban_decentral_heat() 0 44 4
A clean_database() 0 83 3
A offwind_potential_D() 0 30 1
F neighbor_reduction() 0 1076 47
A electrical_neighbours_egon100() 0 7 2
A drop_new_gas_pipelines() 0 9 1
A update_heat_timeseries_germany() 0 18 1
A district_heating_shares() 0 41 2
A geothermal_district_heating() 0 40 2
A postprocessing_biomass_2045() 0 25 2

How to fix

Duplicated Code

Duplicate code is one of the most pungent code smells. A rule that is often used is to re-structure code once it is duplicated in three or more places.


Complexity

Tip: Before tackling complexity, make sure that you eliminate any duplication first. This can often reduce the size of classes significantly.

Complex classes like data.datasets.pypsaeur often do a lot of different things. To break such a class down, we need to identify a cohesive component within that class. A common approach to find such a component is to look for fields/methods that share the same prefixes, or suffixes.

Once you have determined the fields that belong together, you can apply the Extract Class refactoring. If the component makes sense as a sub-class, Extract Subclass is also a candidate, and is often faster.
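For example, a minimal sketch of Extract Class (the class and method names here are hypothetical, not taken from data.datasets.pypsaeur):

    # Before: one class mixes downloading and parsing responsibilities.
    class ReportBuilder:
        def fetch(self, url):
            ...

        def parse(self, raw):
            ...

    # After: the download concern lives in its own cohesive class,
    # and ReportBuilder delegates to it.
    class ReportFetcher:
        def fetch(self, url):
            ...

    class ReportBuilder:
        def __init__(self, fetcher):
            self.fetcher = fetcher

        def parse(self, raw):
            ...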

"""The central module containing all code dealing with importing data from
the pypsa-eur-sec scenario parameter creation
"""

from pathlib import Path
from urllib.request import urlretrieve
import json
import shutil

from shapely.geometry import LineString
import geopandas as gpd
import importlib_resources as resources
import numpy as np
import pandas as pd
import pypsa
import requests
import yaml

from egon.data import __path__, config, db, logger
from egon.data.datasets import Dataset
from egon.data.datasets.scenario_parameters import get_sector_parameters
from egon.data.datasets.scenario_parameters.parameters import (
    annualize_capital_costs,
)
import egon.data.config
import egon.data.subprocess as subproc

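# The two Dataset classes below register the PyPSA-Eur workflow with the
# egon-data pipeline: PreparePypsaEur downloads and prepares the network,
# RunPypsaEur solves it and writes the components of the neighbouring
# countries back to the database.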
class PreparePypsaEur(Dataset):
    def __init__(self, dependencies):
        super().__init__(
            name="PreparePypsaEur",
            version="0.0.42",
            dependencies=dependencies,
            tasks=(
                download,
                prepare_network,
            ),
        )


class RunPypsaEur(Dataset):
    def __init__(self, dependencies):
        super().__init__(
            name="SolvePypsaEur",
            version="0.0.41",
            dependencies=dependencies,
            tasks=(
                prepare_network_2,
                execute,
                solve_network,
                clean_database,
                electrical_neighbours_egon100,
                # Dropped until we decide how to deal with the H2 grid
                # overwrite_H2_pipeline_share,
            ),
        )

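# download() clones PyPSA-Eur at a pinned commit, copies the egon-data
# configuration and custom_extra_functionality.py into the checkout, copies
# the local ERA5 cutout, and fetches input files (natura.tiff, ship density,
# ENSPRESO biomass, GEM trackers) whose download does not work in the
# regular snakemake workflow.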
def download():
    cwd = Path(".")
    filepath = cwd / "run-pypsa-eur"
    filepath.mkdir(parents=True, exist_ok=True)

    pypsa_eur_repos = filepath / "pypsa-eur"
    if config.settings()["egon-data"]["--run-pypsa-eur"]:
        if not pypsa_eur_repos.exists():
            subproc.run(
                [
                    "git",
                    "clone",
                    "--branch",
                    "master",
                    "https://github.com/PyPSA/pypsa-eur.git",
                    pypsa_eur_repos,
                ]
            )

            subproc.run(
                [
                    "git",
                    "checkout",
                    "2119f4cee05c256509f48d4e9fe0d8fd9e9e3632",
                ],
                cwd=pypsa_eur_repos,
            )

            # Add gurobi solver to environment:
            # Read YAML file
            # path_to_env = pypsa_eur_repos / "envs" / "environment.yaml"
            # with open(path_to_env, "r") as stream:
            #    env = yaml.safe_load(stream)

            # The version of gurobipy has to fit to the version of gurobi.
            # Since we mainly use gurobi 10.0 this is set here.
            # env["dependencies"][-1]["pip"].append("gurobipy==10.0.0")

            # Set python version to <3.12
            # Python<=3.12 needs gurobipy>=11.0, in case gurobipy is updated,
            # this can be removed
            # env["dependencies"] = [
            #    "python>=3.8,<3.12" if x == "python>=3.8" else x
            #    for x in env["dependencies"]
            # ]

            # Limit geopandas version
            # our pypsa-eur version is not compatible to geopandas>1
            # env["dependencies"] = [
            #    "geopandas>=0.11.0,<1" if x == "geopandas>=0.11.0" else x
            #    for x in env["dependencies"]
            # ]

            # Write YAML file
            # with open(path_to_env, "w", encoding="utf8") as outfile:
            #    yaml.dump(
            #        env, outfile, default_flow_style=False, allow_unicode=True
            #    )

            # Copy config file for egon-data to pypsa-eur directory
            shutil.copy(
                Path(
                    __path__[0], "datasets", "pypsaeur", "config_prepare.yaml"
                ),
                pypsa_eur_repos / "config" / "config.yaml",
            )

            # Copy custom_extra_functionality.py for egon-data to the
            # pypsa-eur directory
            shutil.copy(
                Path(
                    __path__[0],
                    "datasets",
                    "pypsaeur",
                    "custom_extra_functionality.py",
                ),
                pypsa_eur_repos / "data",
            )

            with open(filepath / "Snakefile", "w") as snakefile:
                snakefile.write(
                    resources.read_text(
                        "egon.data.datasets.pypsaeur", "Snakefile"
                    )
                )

        # Copy era5 weather data to folder for pypsaeur
        era5_pypsaeur_path = filepath / "pypsa-eur" / "cutouts"

        if not era5_pypsaeur_path.exists():
            era5_pypsaeur_path.mkdir(parents=True, exist_ok=True)
            copy_from = config.datasets()["era5_weather_data"]["targets"][
                "weather_data"
            ]["path"]
            filename = "europe-2011-era5.nc"
            shutil.copy(
                copy_from + "/" + filename, era5_pypsaeur_path / filename
            )

        # Workaround to download natura, shipdensity and globalenergymonitor
        # data, which is not working in the regular snakemake workflow.
        # The same files are downloaded from the same directory as in
        # pypsa-eur version 0.10 here. They are stored in the folders from
        # pypsa-eur.
        if not (filepath / "pypsa-eur" / "resources").exists():
            (filepath / "pypsa-eur" / "resources").mkdir(
                parents=True, exist_ok=True
            )
        urlretrieve(
            "https://zenodo.org/record/4706686/files/natura.tiff",
            filepath / "pypsa-eur" / "resources" / "natura.tiff",
        )

        if not (filepath / "pypsa-eur" / "data").exists():
            (filepath / "pypsa-eur" / "data").mkdir(
                parents=True, exist_ok=True
            )
        urlretrieve(
            "https://zenodo.org/record/13757228/files/shipdensity_global.zip",
            filepath / "pypsa-eur" / "data" / "shipdensity_global.zip",
        )

        if not (
            filepath
            / "pypsa-eur"
            / "zenodo.org"
            / "records"
            / "13757228"
            / "files"
        ).exists():
            (
                filepath
                / "pypsa-eur"
                / "zenodo.org"
                / "records"
                / "13757228"
                / "files"
            ).mkdir(parents=True, exist_ok=True)

        urlretrieve(
            "https://zenodo.org/records/10356004/files/ENSPRESO_BIOMASS.xlsx",
            filepath
            / "pypsa-eur"
            / "zenodo.org"
            / "records"
            / "13757228"
            / "files"
            / "ENSPRESO_BIOMASS.xlsx",
        )

        if not (filepath / "pypsa-eur" / "data" / "gem").exists():
            (filepath / "pypsa-eur" / "data" / "gem").mkdir(
                parents=True, exist_ok=True
            )

        r = requests.get(
            "https://tubcloud.tu-berlin.de/s/LMBJQCsN6Ez5cN2/download/"
            "Europe-Gas-Tracker-2024-05.xlsx"
        )
        with open(
            filepath
            / "pypsa-eur"
            / "data"
            / "gem"
            / "Europe-Gas-Tracker-2024-05.xlsx",
            "wb",
        ) as outfile:
            outfile.write(r.content)

        if not (filepath / "pypsa-eur" / "data" / "gem").exists():
            (filepath / "pypsa-eur" / "data" / "gem").mkdir(
                parents=True, exist_ok=True
            )

        r = requests.get(
            "https://tubcloud.tu-berlin.de/s/Aqebo3rrQZWKGsG/download/"
            "Global-Steel-Plant-Tracker-April-2024-Standard-Copy-V1.xlsx"
        )
        with open(
            filepath
            / "pypsa-eur"
            / "data"
            / "gem"
            / "Global-Steel-Plant-Tracker-April-2024-Standard-Copy-V1.xlsx",
            "wb",
        ) as outfile:
            outfile.write(r.content)

    else:
        print("Pypsa-eur is not executed due to the settings of egon-data")

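# prepare_network() runs the "prepare" target of the copied Snakefile,
# applies the egon-data manipulations via execute(), and stores copies of
# the manipulated pre-networks in prenetwork_post-manipulate_pre-solve.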
def prepare_network():
    cwd = Path(".")
    filepath = cwd / "run-pypsa-eur"

    if config.settings()["egon-data"]["--run-pypsa-eur"]:
        subproc.run(
            [
                "snakemake",
                "-j1",
                "--directory",
                filepath,
                "--snakefile",
                filepath / "Snakefile",
                "--use-conda",
                "--conda-frontend=conda",
                "--cores",
                "8",
                "prepare",
            ]
        )
        execute()

        path = filepath / "pypsa-eur" / "results" / "prenetworks"

        path_2 = path / "prenetwork_post-manipulate_pre-solve"
        path_2.mkdir(parents=True, exist_ok=True)

        with open(
            __path__[0] + "/datasets/pypsaeur/config_prepare.yaml", "r"
        ) as stream:
            data_config = yaml.safe_load(stream)

        for i in range(len(data_config["scenario"]["planning_horizons"])):
            nc_file = (
                f"base_s_{data_config['scenario']['clusters'][0]}"
                f"_l{data_config['scenario']['ll'][0]}"
                f"_{data_config['scenario']['opts'][0]}"
                f"_{data_config['scenario']['sector_opts'][0]}"
                f"_{data_config['scenario']['planning_horizons'][i]}.nc"
            )

            shutil.copy(Path(path, nc_file), path_2)

    else:
        print("Pypsa-eur is not executed due to the settings of egon-data")

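# prepare_network_2() swaps in config_solve.yaml and re-runs the snakemake
# "prepare" target with the solve configuration.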
def prepare_network_2():
    cwd = Path(".")
    filepath = cwd / "run-pypsa-eur"

    if config.settings()["egon-data"]["--run-pypsa-eur"]:
        shutil.copy(
            Path(__path__[0], "datasets", "pypsaeur", "config_solve.yaml"),
            filepath / "pypsa-eur" / "config" / "config.yaml",
        )

        subproc.run(
            [
                "snakemake",
                "-j1",
                "--directory",
                filepath,
                "--snakefile",
                filepath / "Snakefile",
                "--use-conda",
                "--conda-frontend=conda",
                "--cores",
                "8",
                "prepare",
            ]
        )
    else:
        print("Pypsa-eur is not executed due to the settings of egon-data")

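# solve_network() runs the snakemake "solve" target, applies
# postprocessing_biomass_2045() to the results, and then builds the summary
# via the "summary" target.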
def solve_network():
    cwd = Path(".")
    filepath = cwd / "run-pypsa-eur"

    if config.settings()["egon-data"]["--run-pypsa-eur"]:
        subproc.run(
            [
                "snakemake",
                "-j1",
                "--cores",
                "8",
                "--directory",
                filepath,
                "--snakefile",
                filepath / "Snakefile",
                "--use-conda",
                "--conda-frontend=conda",
                "solve",
            ]
        )

        postprocessing_biomass_2045()

        subproc.run(
            [
                "snakemake",
                "-j1",
                "--directory",
                filepath,
                "--snakefile",
                filepath / "Snakefile",
                "--use-conda",
                "--conda-frontend=conda",
                "summary",
            ]
        )
    else:
        print("Pypsa-eur is not executed due to the settings of egon-data")

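# read_network() returns a solved post-network, either from the local
# pypsa-eur run or, if pypsa-eur is not executed, from the pre-computed
# data bundle.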
# Scrutinizer note: this code seems to be duplicated in the project
# (prepared_network is flagged with a similar span in the table above).
def read_network(planning_horizon=3):
    if config.settings()["egon-data"]["--run-pypsa-eur"]:
        with open(
            __path__[0] + "/datasets/pypsaeur/config_solve.yaml", "r"
        ) as stream:
            data_config = yaml.safe_load(stream)

        target_file = (
            Path(".")
            / "run-pypsa-eur"
            / "pypsa-eur"
            / "results"
            / data_config["run"]["name"]
            / "postnetworks"
            / f"base_s_{data_config['scenario']['clusters'][0]}"
            f"_l{data_config['scenario']['ll'][0]}"
            f"_{data_config['scenario']['opts'][0]}"
            f"_{data_config['scenario']['sector_opts'][0]}"
            f"_{data_config['scenario']['planning_horizons'][planning_horizon]}.nc"
        )

    else:
        target_file = (
            Path(".")
            / "data_bundle_powerd_data"
            / "pypsa_eur"
            / "21122024_3h_clean_run"
            / "results"
            / "postnetworks"
            / "base_s_39_lc1.25__cb40ex0-T-H-I-B-solar+p3-dist1_2045.nc"
        )

    return pypsa.Network(target_file)

def clean_database():
    """Remove all components abroad for eGon100RE from the database

    Remove all components abroad and their associated time series from
    the database for the scenario 'eGon100RE'.

    Parameters
    ----------
    None

    Returns
    -------
    None

    """
    scn_name = "eGon100RE"

    comp_one_port = ["load", "generator", "store", "storage"]

    # delete existing components and associated timeseries
    for comp in comp_one_port:
        db.execute_sql(
            f"""
            DELETE FROM {"grid.egon_etrago_" + comp + "_timeseries"}
            WHERE {comp + "_id"} IN (
                SELECT {comp + "_id"} FROM {"grid.egon_etrago_" + comp}
                WHERE bus IN (
                    SELECT bus_id FROM grid.egon_etrago_bus
                    WHERE country != 'DE'
                    AND scn_name = '{scn_name}')
                AND scn_name = '{scn_name}'
            );

            DELETE FROM {"grid.egon_etrago_" + comp}
            WHERE bus IN (
                SELECT bus_id FROM grid.egon_etrago_bus
                WHERE country != 'DE'
                AND scn_name = '{scn_name}')
            AND scn_name = '{scn_name}';"""
        )

    comp_2_ports = [
        "line",
        "link",
    ]

    for comp, id in zip(comp_2_ports, ["line_id", "link_id"]):
        db.execute_sql(
            f"""
            DELETE FROM {"grid.egon_etrago_" + comp + "_timeseries"}
            WHERE scn_name = '{scn_name}'
            AND {id} IN (
                SELECT {id} FROM {"grid.egon_etrago_" + comp}
            WHERE "bus0" IN (
            SELECT bus_id FROM grid.egon_etrago_bus
                WHERE country != 'DE'
                AND scn_name = '{scn_name}'
                AND bus_id NOT IN (SELECT bus_i FROM osmtgmod_results.bus_data))
            AND "bus1" IN (
            SELECT bus_id FROM grid.egon_etrago_bus
                WHERE country != 'DE'
                AND scn_name = '{scn_name}'
                AND bus_id NOT IN (SELECT bus_i FROM osmtgmod_results.bus_data))
            );

            DELETE FROM {"grid.egon_etrago_" + comp}
            WHERE scn_name = '{scn_name}'
            AND "bus0" IN (
            SELECT bus_id FROM grid.egon_etrago_bus
                WHERE country != 'DE'
                AND scn_name = '{scn_name}'
                AND bus_id NOT IN (SELECT bus_i FROM osmtgmod_results.bus_data))
            AND "bus1" IN (
            SELECT bus_id FROM grid.egon_etrago_bus
                WHERE country != 'DE'
                AND scn_name = '{scn_name}'
                AND bus_id NOT IN (SELECT bus_i FROM osmtgmod_results.bus_data))
            ;"""
        )

    db.execute_sql(
        f"""
        DELETE FROM grid.egon_etrago_bus
        WHERE scn_name = '{scn_name}'
        AND country <> 'DE'
        AND carrier <> 'AC'
        """
    )

def electrical_neighbours_egon100():
    if "eGon100RE" in egon.data.config.settings()["egon-data"]["--scenarios"]:
        neighbor_reduction()

    else:
        print(
            "eGon100RE is not in the list of created scenarios, this task is skipped."
        )

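# combine_decentral_and_rural_heat() merges the "urban decentral" heat
# components outside Germany into their "rural" counterparts: capacities
# are added where a rural twin exists, otherwise the component is
# re-labelled, and the corresponding loads are shifted as well.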
def combine_decentral_and_rural_heat(network_solved, network_prepared):

    for comp in network_solved.iterate_components():

        if comp.name in ["Bus", "Link", "Store"]:
            urban_decentral = comp.df[
                comp.df.carrier.str.contains("urban decentral")
            ]
            rural = comp.df[comp.df.carrier.str.contains("rural")]
            for i, row in urban_decentral.iterrows():
                if "DE" not in i:
                    if comp.name in ["Bus"]:
                        network_solved.remove("Bus", i)
                    if comp.name in ["Link", "Generator"]:
                        if (
                            i.replace("urban decentral", "rural")
                            in rural.index
                        ):
                            rural.loc[
                                i.replace("urban decentral", "rural"),
                                "p_nom_opt",
                            ] += urban_decentral.loc[i, "p_nom_opt"]
                            rural.loc[
                                i.replace("urban decentral", "rural"), "p_nom"
                            ] += urban_decentral.loc[i, "p_nom"]
                            network_solved.remove(comp.name, i)
                        else:
                            print(i)
                            comp.df.loc[i, "bus0"] = comp.df.loc[
                                i, "bus0"
                            ].replace("urban decentral", "rural")
                            comp.df.loc[i, "bus1"] = comp.df.loc[
                                i, "bus1"
                            ].replace("urban decentral", "rural")
                            comp.df.loc[i, "carrier"] = comp.df.loc[
                                i, "carrier"
                            ].replace("urban decentral", "rural")
                    if comp.name in ["Store"]:
                        if (
                            i.replace("urban decentral", "rural")
                            in rural.index
                        ):
                            rural.loc[
                                i.replace("urban decentral", "rural"),
                                "e_nom_opt",
                            ] += urban_decentral.loc[i, "e_nom_opt"]
                            rural.loc[
                                i.replace("urban decentral", "rural"), "e_nom"
                            ] += urban_decentral.loc[i, "e_nom"]
                            network_solved.remove(comp.name, i)

                        else:
                            print(i)
                            network_solved.stores.loc[i, "bus"] = (
                                network_solved.stores.loc[i, "bus"].replace(
                                    "urban decentral", "rural"
                                )
                            )
                            network_solved.stores.loc[i, "carrier"] = (
                                "rural water tanks"
                            )

    urban_decentral_loads = network_prepared.loads[
        network_prepared.loads.carrier.str.contains("urban decentral")
    ]

    for i, row in urban_decentral_loads.iterrows():
        if i in network_prepared.loads_t.p_set.columns:
            network_prepared.loads_t.p_set[
                i.replace("urban decentral", "rural")
            ] += network_prepared.loads_t.p_set[i]
    network_prepared.mremove("Load", urban_decentral_loads.index)

    return network_prepared, network_solved

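# neighbor_reduction() trims the solved European network down to Germany
# and its electrical neighbours, converts flows on cut cross-border lines
# and links into fixed loads, re-indexes the remaining foreign components
# with eTraGo ids and writes them, including time series, to the
# grid.egon_etrago_* tables.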
def neighbor_reduction():
    network_solved = read_network()
    network_prepared = prepared_network(planning_horizon="2045")

    # network.links.drop("pipe_retrofit", axis="columns", inplace=True)

    wanted_countries = [
        "DE",
        "AT",
        "CH",
        "CZ",
        "PL",
        "SE",
        "NO",
        "DK",
        "GB",
        "NL",
        "BE",
        "FR",
        "LU",
    ]

    foreign_buses = network_solved.buses[
        (~network_solved.buses.index.str.contains("|".join(wanted_countries)))
        | (network_solved.buses.index.str.contains("FR6"))
    ]
    network_solved.buses = network_solved.buses.drop(
        network_solved.buses.loc[foreign_buses.index].index
    )

    # Add H2 demand of Fischer-Tropsch process and methanolisation
    # to industrial H2 demands
    industrial_hydrogen = network_prepared.loads.loc[
        network_prepared.loads.carrier == "H2 for industry"
    ]
    fischer_tropsch = (
        network_solved.links_t.p0[
            network_solved.links.loc[
                network_solved.links.carrier == "Fischer-Tropsch"
            ].index
        ]
        .mul(network_solved.snapshot_weightings.generators, axis=0)
        .sum()
    )
    methanolisation = (
        network_solved.links_t.p0[
            network_solved.links.loc[
                network_solved.links.carrier == "methanolisation"
            ].index
        ]
        .mul(network_solved.snapshot_weightings.generators, axis=0)
        .sum()
    )
    for i, row in industrial_hydrogen.iterrows():
        network_prepared.loads.loc[i, "p_set"] += (
            fischer_tropsch[
                fischer_tropsch.index.str.startswith(row.bus[:5])
            ].sum()
            / 8760
        )
        network_prepared.loads.loc[i, "p_set"] += (
            methanolisation[
                methanolisation.index.str.startswith(row.bus[:5])
            ].sum()
            / 8760
        )

    # drop lines whose ends both lie outside the kept buses
    network_solved.lines = network_solved.lines.drop(
        network_solved.lines[
            (~network_solved.lines["bus0"].isin(network_solved.buses.index))
            & (~network_solved.lines["bus1"].isin(network_solved.buses.index))
        ].index
    )

    # select all lines which have at bus1 the bus which is kept
    lines_cb_1 = network_solved.lines[
        ~network_solved.lines["bus0"].isin(network_solved.buses.index)
    ]

    # create a load at bus1 with the line's hourly loading
    for i, k in zip(lines_cb_1.bus1.values, lines_cb_1.index):

        # Copy loading of lines into hourly resolution
        pset = pd.Series(
            index=network_prepared.snapshots,
            data=network_solved.lines_t.p1[k].resample("H").ffill(),
        )
        pset["2011-12-31 22:00:00"] = pset["2011-12-31 21:00:00"]
        pset["2011-12-31 23:00:00"] = pset["2011-12-31 21:00:00"]

        # Loads are all imported from the prepared network in the end
        network_prepared.add(
            "Load",
            "slack_fix " + i + " " + k,
            bus=i,
            p_set=pset,
            carrier=lines_cb_1.loc[k, "carrier"],
        )

    # select all lines which have at bus0 the bus which is kept
    lines_cb_0 = network_solved.lines[
        ~network_solved.lines["bus1"].isin(network_solved.buses.index)
    ]

    # create a load at bus0 with the line's hourly loading
    for i, k in zip(lines_cb_0.bus0.values, lines_cb_0.index):
        # Copy loading of lines into hourly resolution
        pset = pd.Series(
            index=network_prepared.snapshots,
            data=network_solved.lines_t.p0[k].resample("H").ffill(),
        )
        pset["2011-12-31 22:00:00"] = pset["2011-12-31 21:00:00"]
        pset["2011-12-31 23:00:00"] = pset["2011-12-31 21:00:00"]

        network_prepared.add(
            "Load",
            "slack_fix " + i + " " + k,
            bus=i,
            p_set=pset,
            carrier=lines_cb_0.loc[k, "carrier"],
        )

    # do the same for links
    network_solved.mremove(
        "Link",
        network_solved.links[
            (~network_solved.links.bus0.isin(network_solved.buses.index))
            | (~network_solved.links.bus1.isin(network_solved.buses.index))
        ].index,
    )

    # select all links which have at bus1 the bus which is kept
    links_cb_1 = network_solved.links[
        ~network_solved.links["bus0"].isin(network_solved.buses.index)
    ]

    # create a load at bus1 with the link's hourly loading
    for i, k in zip(links_cb_1.bus1.values, links_cb_1.index):
        pset = pd.Series(
            index=network_prepared.snapshots,
            data=network_solved.links_t.p1[k].resample("H").ffill(),
        )
        pset["2011-12-31 22:00:00"] = pset["2011-12-31 21:00:00"]
        pset["2011-12-31 23:00:00"] = pset["2011-12-31 21:00:00"]

        network_prepared.add(
            "Load",
            "slack_fix_links " + i + " " + k,
            bus=i,
            p_set=pset,
            carrier=links_cb_1.loc[k, "carrier"],
        )

    # select all links which have at bus0 the bus which is kept
    links_cb_0 = network_solved.links[
        ~network_solved.links["bus1"].isin(network_solved.buses.index)
    ]

    # create a load at bus0 with the link's hourly loading
    for i, k in zip(links_cb_0.bus0.values, links_cb_0.index):
        pset = pd.Series(
            index=network_prepared.snapshots,
            data=network_solved.links_t.p0[k].resample("H").ffill(),
        )
        pset["2011-12-31 22:00:00"] = pset["2011-12-31 21:00:00"]
        pset["2011-12-31 23:00:00"] = pset["2011-12-31 21:00:00"]

        network_prepared.add(
            "Load",
            "slack_fix_links " + i + " " + k,
            bus=i,
            p_set=pset,
            carrier=links_cb_0.carrier[k],
        )

    # drop remaining foreign components
    for comp in network_solved.iterate_components():
        if "bus0" in comp.df.columns:
            network_solved.mremove(
                comp.name,
                comp.df[~comp.df.bus0.isin(network_solved.buses.index)].index,
            )
            network_solved.mremove(
                comp.name,
                comp.df[~comp.df.bus1.isin(network_solved.buses.index)].index,
            )
        elif "bus" in comp.df.columns:
            network_solved.mremove(
                comp.name,
                comp.df[~comp.df.bus.isin(network_solved.buses.index)].index,
            )

    # Combine urban decentral and rural heat
    network_prepared, network_solved = combine_decentral_and_rural_heat(
        network_solved, network_prepared
    )

    # writing components of neighboring countries to etrago tables

    # Set country tag for all buses
    network_solved.buses.country = network_solved.buses.index.str[:2]
    neighbors = network_solved.buses[network_solved.buses.country != "DE"]

    neighbors["new_index"] = (
        db.next_etrago_id("bus") + neighbors.reset_index().index
    )

    # Use index of AC buses created by electrical_neighbours
    foreign_ac_buses = db.select_dataframe(
        """
        SELECT * FROM grid.egon_etrago_bus
        WHERE carrier = 'AC' AND v_nom = 380
        AND country != 'DE' AND scn_name = 'eGon100RE'
        AND bus_id NOT IN (SELECT bus_i FROM osmtgmod_results.bus_data)
        """
    )
    buses_with_defined_id = neighbors[
        (neighbors.carrier == "AC")
        & (neighbors.country.isin(foreign_ac_buses.country.values))
    ].index
    neighbors.loc[buses_with_defined_id, "new_index"] = (
        foreign_ac_buses.set_index("x")
        .loc[neighbors.loc[buses_with_defined_id, "x"]]
        .bus_id.values
    )

    # lines, the foreign crossborder lines
    # (without crossborder lines to Germany!)
    neighbor_lines = network_solved.lines[
        network_solved.lines.bus0.isin(neighbors.index)
        & network_solved.lines.bus1.isin(neighbors.index)
    ]
    if not network_solved.lines_t["s_max_pu"].empty:
        neighbor_lines_t = network_prepared.lines_t["s_max_pu"][
            neighbor_lines.index
        ]

    neighbor_lines.reset_index(inplace=True)
    neighbor_lines.bus0 = (
        neighbors.loc[neighbor_lines.bus0, "new_index"].reset_index().new_index
    )
    neighbor_lines.bus1 = (
        neighbors.loc[neighbor_lines.bus1, "new_index"].reset_index().new_index
    )
    neighbor_lines.index += db.next_etrago_id("line")

    if not network_solved.lines_t["s_max_pu"].empty:
        # Scrutinizer flags neighbor_lines_t as possibly undefined here;
        # it is only created under the same s_max_pu guard above, so both
        # branches are always taken together.
        for i in neighbor_lines_t.columns:
            new_index = neighbor_lines[neighbor_lines["name"] == i].index
            neighbor_lines_t.rename(columns={i: new_index[0]}, inplace=True)

    # links
    neighbor_links = network_solved.links[
        network_solved.links.bus0.isin(neighbors.index)
        & network_solved.links.bus1.isin(neighbors.index)
    ]

    neighbor_links.reset_index(inplace=True)
    neighbor_links.bus0 = (
        neighbors.loc[neighbor_links.bus0, "new_index"].reset_index().new_index
    )
    neighbor_links.bus1 = (
        neighbors.loc[neighbor_links.bus1, "new_index"].reset_index().new_index
    )
    neighbor_links.index += db.next_etrago_id("link")

    # generators
    neighbor_gens = network_solved.generators[
        network_solved.generators.bus.isin(neighbors.index)
    ]
    neighbor_gens_t = network_prepared.generators_t["p_max_pu"][
        neighbor_gens[
            neighbor_gens.index.isin(
                network_prepared.generators_t["p_max_pu"].columns
            )
        ].index
    ]

    gen_time = [
        "solar",
        "onwind",
        "solar rooftop",
        "offwind-ac",
        "offwind-dc",
        "solar-hsat",
        "urban central solar thermal",
        "rural solar thermal",
        "offwind-float",
    ]

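    # Fall back to the base generator's series for renewables whose solved
    # name has no own timeseries; mgt[0:-5] strips a 5-character suffix,
    # presumably the planning-horizon tag (e.g. " 2045") in the name.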
    missing_gent = neighbor_gens[
        neighbor_gens["carrier"].isin(gen_time)
        & ~neighbor_gens.index.isin(neighbor_gens_t.columns)
    ].index

    gen_timeseries = network_prepared.generators_t["p_max_pu"].copy()
    for mgt in missing_gent:  # mgt: missing generator timeseries
        try:
            neighbor_gens_t[mgt] = gen_timeseries.loc[:, mgt[0:-5]]
        except KeyError:
            print(f"There are no timeseries for {mgt}")

    neighbor_gens.reset_index(inplace=True)
    neighbor_gens.bus = (
        neighbors.loc[neighbor_gens.bus, "new_index"].reset_index().new_index
    )
    neighbor_gens.index += db.next_etrago_id("generator")

    for i in neighbor_gens_t.columns:
        new_index = neighbor_gens[neighbor_gens["Generator"] == i].index
        neighbor_gens_t.rename(columns={i: new_index[0]}, inplace=True)

    # loads
    # imported from prenetwork in 1h-resolution
    neighbor_loads = network_prepared.loads[
        network_prepared.loads.bus.isin(neighbors.index)
    ]
    neighbor_loads_t_index = neighbor_loads.index[
        neighbor_loads.index.isin(network_prepared.loads_t.p_set.columns)
    ]
    neighbor_loads_t = network_prepared.loads_t["p_set"][
        neighbor_loads_t_index
    ]

    neighbor_loads.reset_index(inplace=True)
    neighbor_loads.bus = (
        neighbors.loc[neighbor_loads.bus, "new_index"].reset_index().new_index
    )
    neighbor_loads.index += db.next_etrago_id("load")

    for i in neighbor_loads_t.columns:
        new_index = neighbor_loads[neighbor_loads["Load"] == i].index
        neighbor_loads_t.rename(columns={i: new_index[0]}, inplace=True)

    # stores
    neighbor_stores = network_solved.stores[
        network_solved.stores.bus.isin(neighbors.index)
    ]
    neighbor_stores_t_index = neighbor_stores.index[
        neighbor_stores.index.isin(network_solved.stores_t.e_min_pu.columns)
    ]
    neighbor_stores_t = network_prepared.stores_t["e_min_pu"][
        neighbor_stores_t_index
    ]

    neighbor_stores.reset_index(inplace=True)
    neighbor_stores.bus = (
        neighbors.loc[neighbor_stores.bus, "new_index"].reset_index().new_index
    )
    neighbor_stores.index += db.next_etrago_id("store")

    for i in neighbor_stores_t.columns:
        new_index = neighbor_stores[neighbor_stores["Store"] == i].index
        neighbor_stores_t.rename(columns={i: new_index[0]}, inplace=True)

    # storage_units
    neighbor_storage = network_solved.storage_units[
        network_solved.storage_units.bus.isin(neighbors.index)
    ]
    neighbor_storage_t_index = neighbor_storage.index[
        neighbor_storage.index.isin(
            network_solved.storage_units_t.inflow.columns
        )
    ]
    neighbor_storage_t = network_prepared.storage_units_t["inflow"][
        neighbor_storage_t_index
    ]

    neighbor_storage.reset_index(inplace=True)
    neighbor_storage.bus = (
        neighbors.loc[neighbor_storage.bus, "new_index"]
        .reset_index()
        .new_index
    )
    neighbor_storage.index += db.next_etrago_id("storage")

    for i in neighbor_storage_t.columns:
        new_index = neighbor_storage[
            neighbor_storage["StorageUnit"] == i
        ].index
        neighbor_storage_t.rename(columns={i: new_index[0]}, inplace=True)

    # Connect to local database
    engine = db.engine()

    neighbors["scn_name"] = "eGon100RE"
    neighbors.index = neighbors["new_index"]

    # Correct geometry for non AC buses
    carriers = set(neighbors.carrier.to_list())
    carriers = [e for e in carriers if e not in ("AC",)]
    non_AC_neighbors = pd.DataFrame()
    for c in carriers:
        c_neighbors = neighbors[neighbors.carrier == c].set_index(
            "location", drop=False
        )
        for i in ["x", "y"]:
            c_neighbors = c_neighbors.drop(i, axis=1)
        coordinates = neighbors[neighbors.carrier == "AC"][
            ["location", "x", "y"]
        ].set_index("location")
        c_neighbors = pd.concat([coordinates, c_neighbors], axis=1).set_index(
            "new_index", drop=False
        )
        non_AC_neighbors = pd.concat([non_AC_neighbors, c_neighbors])

    neighbors = pd.concat(
        [neighbors[neighbors.carrier == "AC"], non_AC_neighbors]
    )

    for i in [
        "new_index",
        "control",
        "generator",
        "location",
        "sub_network",
        "unit",
        "substation_lv",
        "substation_off",
    ]:
        neighbors = neighbors.drop(i, axis=1)

    # Add geometry column
    neighbors = (
        gpd.GeoDataFrame(
            neighbors, geometry=gpd.points_from_xy(neighbors.x, neighbors.y)
        )
        .rename_geometry("geom")
        .set_crs(4326)
    )

    # Unify carrier names
    neighbors.carrier = neighbors.carrier.str.replace(" ", "_")
    neighbors.carrier.replace(
        {
            "gas": "CH4",
            "gas_for_industry": "CH4_for_industry",
            "urban_central_heat": "central_heat",
            "EV_battery": "Li_ion",
            "urban_central_water_tanks": "central_heat_store",
            "rural_water_tanks": "rural_heat_store",
        },
        inplace=True,
    )

    neighbors[~neighbors.carrier.isin(["AC"])].to_postgis(
        "egon_etrago_bus",
        engine,
        schema="grid",
        if_exists="append",
        index=True,
        index_label="bus_id",
    )

    # prepare and write neighboring crossborder lines to etrago tables
    def lines_to_etrago(neighbor_lines=neighbor_lines, scn="eGon100RE"):
1054
        neighbor_lines["scn_name"] = scn
1055
        neighbor_lines["cables"] = 3 * neighbor_lines["num_parallel"].astype(
1056
            int
1057
        )
1058
        neighbor_lines["s_nom"] = neighbor_lines["s_nom_min"]
1059
1060
        for i in [
1061
            "Line",
1062
            "x_pu_eff",
1063
            "r_pu_eff",
1064
            "sub_network",
1065
            "x_pu",
1066
            "r_pu",
1067
            "g_pu",
1068
            "b_pu",
1069
            "s_nom_opt",
1070
            "i_nom",
1071
            "dc",
1072
        ]:
1073
            neighbor_lines = neighbor_lines.drop(i, axis=1)
1074
1075
        # Define geometry and add to lines dataframe as 'topo'
1076
        gdf = gpd.GeoDataFrame(index=neighbor_lines.index)
1077
        gdf["geom_bus0"] = neighbors.geom[neighbor_lines.bus0].values
1078
        gdf["geom_bus1"] = neighbors.geom[neighbor_lines.bus1].values
1079
        gdf["geometry"] = gdf.apply(
1080
            lambda x: LineString([x["geom_bus0"], x["geom_bus1"]]), axis=1
1081
        )
1082
1083
        neighbor_lines = (
1084
            gpd.GeoDataFrame(neighbor_lines, geometry=gdf["geometry"])
1085
            .rename_geometry("topo")
1086
            .set_crs(4326)
1087
        )
1088
1089
        neighbor_lines["lifetime"] = get_sector_parameters("electricity", scn)[
1090
            "lifetime"
1091
        ]["ac_ehv_overhead_line"]
1092
1093
        neighbor_lines.to_postgis(
1094
            "egon_etrago_line",
1095
            engine,
1096
            schema="grid",
1097
            if_exists="append",
1098
            index=True,
1099
            index_label="line_id",
1100
        )
1101
1102
    lines_to_etrago(neighbor_lines=neighbor_lines, scn="eGon100RE")
1103
1104
    def links_to_etrago(neighbor_links, scn="eGon100RE", extendable=True):
1105
        """Prepare and write neighboring crossborder links to eTraGo table
1106
1107
        This function prepare the neighboring crossborder links
1108
        generated the PyPSA-eur-sec (p-e-s) run by:
1109
          * Delete the useless columns
1110
          * If extendable is false only (non default case):
1111
              * Replace p_nom = 0 with the p_nom_op values (arrising
1112
                from the p-e-s optimisation)
1113
              * Setting p_nom_extendable to false
1114
          * Add geomtry to the links: 'geom' and 'topo' columns
1115
          * Change the name of the carriers to have the consistent in
1116
            eGon-data
1117
1118
        The function insert then the link to the eTraGo table and has
1119
        no return.
1120
1121
        Parameters
1122
        ----------
1123
        neighbor_links : pandas.DataFrame
1124
            Dataframe containing the neighboring crossborder links
1125
        scn_name : str
1126
            Name of the scenario
1127
        extendable : bool
1128
            Boolean expressing if the links should be extendable or not
1129
1130
        Returns
1131
        -------
1132
        None
1133
1134
        """
1135
        neighbor_links["scn_name"] = scn
1136
1137
        dropped_carriers = [
1138
            "Link",
1139
            "geometry",
1140
            "tags",
1141
            "under_construction",
1142
            "underground",
1143
            "underwater_fraction",
1144
            "bus2",
1145
            "bus3",
1146
            "bus4",
1147
            "efficiency2",
1148
            "efficiency3",
1149
            "efficiency4",
1150
            "lifetime",
1151
            "pipe_retrofit",
1152
            "committable",
1153
            "start_up_cost",
1154
            "shut_down_cost",
1155
            "min_up_time",
1156
            "min_down_time",
1157
            "up_time_before",
1158
            "down_time_before",
1159
            "ramp_limit_up",
1160
            "ramp_limit_down",
1161
            "ramp_limit_start_up",
1162
            "ramp_limit_shut_down",
1163
            "length_original",
1164
            "reversed",
1165
            "location",
1166
            "project_status",
1167
            "dc",
1168
            "voltage",
1169
        ]
1170
1171
        if extendable:
1172
            dropped_carriers.append("p_nom_opt")
1173
            neighbor_links = neighbor_links.drop(
1174
                columns=dropped_carriers,
1175
                errors="ignore",
1176
            )
1177
1178
        else:
1179
            dropped_carriers.append("p_nom")
1180
            dropped_carriers.append("p_nom_extendable")
1181
            neighbor_links = neighbor_links.drop(
1182
                columns=dropped_carriers,
1183
                errors="ignore",
1184
            )
1185
            neighbor_links = neighbor_links.rename(
1186
                columns={"p_nom_opt": "p_nom"}
1187
            )
1188
            neighbor_links["p_nom_extendable"] = False
1189
1190
        if neighbor_links.empty:
1191
            print("No links selected")
1192
            return
1193
1194
        # Define geometry and add to lines dataframe as 'topo'
1195
        gdf = gpd.GeoDataFrame(
1196
            index=neighbor_links.index,
1197
            data={
1198
                "geom_bus0": neighbors.loc[neighbor_links.bus0, "geom"].values,
1199
                "geom_bus1": neighbors.loc[neighbor_links.bus1, "geom"].values,
1200
            },
1201
        )
1202
1203
        gdf["geometry"] = gdf.apply(
1204
            lambda x: LineString([x["geom_bus0"], x["geom_bus1"]]), axis=1
1205
        )
1206
1207
        neighbor_links = (
1208
            gpd.GeoDataFrame(neighbor_links, geometry=gdf["geometry"])
1209
            .rename_geometry("topo")
1210
            .set_crs(4326)
1211
        )
1212
1213
        # Unify carrier names
1214
        neighbor_links.carrier = neighbor_links.carrier.str.replace(" ", "_")
1215
1216
        neighbor_links.carrier.replace(
1217
            {
1218
                "H2_Electrolysis": "power_to_H2",
1219
                "H2_Fuel_Cell": "H2_to_power",
1220
                "H2_pipeline_retrofitted": "H2_retrofit",
1221
                "SMR": "CH4_to_H2",
1222
                "Sabatier": "H2_to_CH4",
1223
                "gas_for_industry": "CH4_for_industry",
1224
                "gas_pipeline": "CH4",
1225
                "urban_central_gas_boiler": "central_gas_boiler",
1226
                "urban_central_resistive_heater": "central_resistive_heater",
1227
                "urban_central_water_tanks_charger": "central_heat_store_charger",
1228
                "urban_central_water_tanks_discharger": "central_heat_store_discharger",
1229
                "rural_water_tanks_charger": "rural_heat_store_charger",
1230
                "rural_water_tanks_discharger": "rural_heat_store_discharger",
1231
                "urban_central_gas_CHP": "central_gas_CHP",
1232
                "urban_central_air_heat_pump": "central_heat_pump",
1233
                "rural_ground_heat_pump": "rural_heat_pump",
1234
            },
1235
            inplace=True,
1236
        )
1237
1238
        for c in [
1239
            "H2_to_CH4",
1240
            "H2_to_power",
1241
            "power_to_H2",
1242
            "CH4_to_H2",
1243
        ]:
1244
            neighbor_links.loc[
1245
                (neighbor_links.carrier == c),
1246
                "lifetime",
1247
            ] = get_sector_parameters("gas", "eGon100RE")["lifetime"][c]
1248
1249
        neighbor_links.to_postgis(
1250
            "egon_etrago_link",
1251
            engine,
1252
            schema="grid",
1253
            if_exists="append",
1254
            index=True,
1255
            index_label="link_id",
1256
        )
1257
1258
    extendable_links_carriers = [
1259
        "battery charger",
1260
        "battery discharger",
1261
        "home battery charger",
1262
        "home battery discharger",
1263
        "rural water tanks charger",
1264
        "rural water tanks discharger",
1265
        "urban central water tanks charger",
1266
        "urban central water tanks discharger",
1267
        "urban decentral water tanks charger",
1268
        "urban decentral water tanks discharger",
1269
        "H2 Electrolysis",
1270
        "H2 Fuel Cell",
1271
        "SMR",
1272
        "Sabatier",
1273
    ]
1274
1275
    # delete unwanted carriers for eTraGo
1276
    excluded_carriers = [
1277
        "gas for industry CC",
1278
        "SMR CC",
1279
        "DAC",
1280
    ]
1281
    neighbor_links = neighbor_links[
1282
        ~neighbor_links.carrier.isin(excluded_carriers)
1283
    ]
1284
1285
    # Combine CHP_CC and CHP
1286
    chp_cc = neighbor_links[
1287
        neighbor_links.carrier == "urban central gas CHP CC"
1288
    ]
1289
    for index, row in chp_cc.iterrows():
1290
        neighbor_links.loc[
1291
            neighbor_links.Link == row.Link.replace("CHP CC", "CHP"),
1292
            "p_nom_opt",
1293
        ] += row.p_nom_opt
1294
        neighbor_links.loc[
1295
            neighbor_links.Link == row.Link.replace("CHP CC", "CHP"), "p_nom"
1296
        ] += row.p_nom
1297
        neighbor_links.drop(index, inplace=True)
1298
1299
    # Combine heat pumps
1300
    # Like in Germany, there are air heat pumps in central heat grids
1301
    # and ground heat pumps in rural areas
1302
    rural_air = neighbor_links[neighbor_links.carrier == "rural air heat pump"]
1303
    for index, row in rural_air.iterrows():
1304
        neighbor_links.loc[
1305
            neighbor_links.Link == row.Link.replace("air", "ground"),
1306
            "p_nom_opt",
1307
        ] += row.p_nom_opt
1308
        neighbor_links.loc[
1309
            neighbor_links.Link == row.Link.replace("air", "ground"), "p_nom"
1310
        ] += row.p_nom
1311
        neighbor_links.drop(index, inplace=True)
1312
    links_to_etrago(
1313
        neighbor_links[neighbor_links.carrier.isin(extendable_links_carriers)],
1314
        "eGon100RE",
1315
    )
1316
    links_to_etrago(
1317
        neighbor_links[
1318
            ~neighbor_links.carrier.isin(extendable_links_carriers)
1319
        ],
1320
        "eGon100RE",
1321
        extendable=False,
1322
    )
1323
    # Include links time-series
1324
    # For heat_pumps
1325
    hp = neighbor_links[neighbor_links["carrier"].str.contains("heat pump")]
1326
1327
    neighbor_eff_t = network_prepared.links_t["efficiency"][
1328
        hp[hp.Link.isin(network_prepared.links_t["efficiency"].columns)].index
1329
    ]
1330
1331
    missing_hp = hp[~hp["Link"].isin(neighbor_eff_t.columns)].Link
1332
1333
    eff_timeseries = network_prepared.links_t["efficiency"].copy()
1334
    for met in missing_hp:  # met: missing efficiency timeseries
1335
        try:
1336
            neighbor_eff_t[met] = eff_timeseries.loc[:, met[0:-5]]
1337
        except:
1338
            print(f"There are not timeseries for heat_pump {met}")
1339
1340
    for i in neighbor_eff_t.columns:
1341
        new_index = neighbor_links[neighbor_links["Link"] == i].index
1342
        neighbor_eff_t.rename(columns={i: new_index[0]}, inplace=True)
1343
1344
    # Include links time-series
1345
    # For ev_chargers
1346
    ev = neighbor_links[neighbor_links["carrier"].str.contains("BEV charger")]
1347
1348
    ev_p_max_pu = network_prepared.links_t["p_max_pu"][
1349
        ev[ev.Link.isin(network_prepared.links_t["p_max_pu"].columns)].index
1350
    ]
1351
1352
    missing_ev = ev[~ev["Link"].isin(ev_p_max_pu.columns)].Link
1353
1354
    ev_p_max_pu_timeseries = network_prepared.links_t["p_max_pu"].copy()
1355
    for mct in missing_ev:  # evt: missing charger timeseries
1356
        try:
1357
            ev_p_max_pu[mct] = ev_p_max_pu_timeseries.loc[:, mct[0:-5]]
1358
        except:
1359
            print(f"There are not timeseries for EV charger {mct}")
1360
1361
    for i in ev_p_max_pu.columns:
1362
        new_index = neighbor_links[neighbor_links["Link"] == i].index
1363
        ev_p_max_pu.rename(columns={i: new_index[0]}, inplace=True)
1364
1365
    # prepare neighboring generators for etrago tables
1366
    neighbor_gens["scn_name"] = "eGon100RE"
1367
    neighbor_gens["p_nom"] = neighbor_gens["p_nom_opt"]
1368
    neighbor_gens["p_nom_extendable"] = False
1369
1370
    # Unify carrier names
1371
    neighbor_gens.carrier = neighbor_gens.carrier.str.replace(" ", "_")
1372
1373
    neighbor_gens.carrier.replace(
1374
        {
1375
            "onwind": "wind_onshore",
1376
            "ror": "run_of_river",
1377
            "offwind-ac": "wind_offshore",
1378
            "offwind-dc": "wind_offshore",
1379
            "offwind-float": "wind_offshore",
1380
            "urban_central_solar_thermal": "urban_central_solar_thermal_collector",
1381
            "residential_rural_solar_thermal": "residential_rural_solar_thermal_collector",
1382
            "services_rural_solar_thermal": "services_rural_solar_thermal_collector",
1383
            "solar-hsat": "solar",
1384
        },
1385
        inplace=True,
1386
    )
1387
1388
    for i in [
1389
        "Generator",
1390
        "weight",
1391
        "lifetime",
1392
        "p_set",
1393
        "q_set",
1394
        "p_nom_opt",
1395
        "e_sum_min",
1396
        "e_sum_max",
1397
    ]:
1398
        neighbor_gens = neighbor_gens.drop(i, axis=1)
1399
1400
    neighbor_gens.to_sql(
1401
        "egon_etrago_generator",
1402
        engine,
1403
        schema="grid",
1404
        if_exists="append",
1405
        index=True,
1406
        index_label="generator_id",
1407
    )
1408
1409
    # prepare neighboring loads for etrago tables
1410
    neighbor_loads["scn_name"] = "eGon100RE"
1411
1412
    # Unify carrier names
1413
    neighbor_loads.carrier = neighbor_loads.carrier.str.replace(" ", "_")
1414
1415
    neighbor_loads.carrier.replace(
1416
        {
1417
            "electricity": "AC",
1418
            "DC": "AC",
1419
            "industry_electricity": "AC",
1420
            "H2_pipeline_retrofitted": "H2_system_boundary",
1421
            "gas_pipeline": "CH4_system_boundary",
1422
            "gas_for_industry": "CH4_for_industry",
1423
            "urban_central_heat": "central_heat",
1424
        },
1425
        inplace=True,
1426
    )
1427
1428
    neighbor_loads = neighbor_loads.drop(
1429
        columns=["Load"],
1430
        errors="ignore",
1431
    )
1432
1433
    neighbor_loads.to_sql(
1434
        "egon_etrago_load",
1435
        engine,
1436
        schema="grid",
1437
        if_exists="append",
1438
        index=True,
1439
        index_label="load_id",
1440
    )
1441
1442
    # prepare neighboring stores for etrago tables
1443
    neighbor_stores["scn_name"] = "eGon100RE"
1444
1445
    # Unify carrier names
1446
    neighbor_stores.carrier = neighbor_stores.carrier.str.replace(" ", "_")
1447
1448
    neighbor_stores.carrier.replace(
1449
        {
1450
            "Li_ion": "battery",
1451
            "gas": "CH4",
1452
            "urban_central_water_tanks": "central_heat_store",
1453
            "rural_water_tanks": "rural_heat_store",
1454
            "EV_battery": "battery_storage",
1455
        },
1456
        inplace=True,
1457
    )
    neighbor_stores.loc[
        (
            (neighbor_stores.e_nom_max <= 1e9)
            & (neighbor_stores.carrier == "H2_Store")
        ),
        "carrier",
    ] = "H2_underground"
    neighbor_stores.loc[
        (
            (neighbor_stores.e_nom_max > 1e9)
            & (neighbor_stores.carrier == "H2_Store")
        ),
        "carrier",
    ] = "H2_overground"

    for i in [
        "Store",
        "p_set",
        "q_set",
        "e_nom_opt",
        "lifetime",
        "e_initial_per_period",
        "e_cyclic_per_period",
        "location",
    ]:
        neighbor_stores = neighbor_stores.drop(i, axis=1, errors="ignore")

    for c in ["H2_underground", "H2_overground"]:
        neighbor_stores.loc[
            (neighbor_stores.carrier == c),
            "lifetime",
        ] = get_sector_parameters("gas", "eGon100RE")["lifetime"][c]

    neighbor_stores.to_sql(
        "egon_etrago_store",
        engine,
        schema="grid",
        if_exists="append",
        index=True,
        index_label="store_id",
    )

    # prepare neighboring storage_units for etrago tables
    neighbor_storage["scn_name"] = "eGon100RE"

    # Unify carrier names
    neighbor_storage.carrier = neighbor_storage.carrier.str.replace(" ", "_")

    neighbor_storage.carrier.replace(
        {"PHS": "pumped_hydro", "hydro": "reservoir"}, inplace=True
    )

    for i in [
        "StorageUnit",
        "p_nom_opt",
        "state_of_charge_initial_per_period",
        "cyclic_state_of_charge_per_period",
    ]:
        neighbor_storage = neighbor_storage.drop(i, axis=1, errors="ignore")

    neighbor_storage.to_sql(
        "egon_etrago_storage",
        engine,
        schema="grid",
        if_exists="append",
        index=True,
        index_label="storage_id",
    )

    # writing neighboring loads_t p_sets to etrago tables
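    # The etrago timeseries tables hold one row per component, with the
    # full annual series stored as an array, hence the conversion of each
    # column to a list below.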
    neighbor_loads_t_etrago = pd.DataFrame(
        columns=["scn_name", "temp_id", "p_set"],
        index=neighbor_loads_t.columns,
    )
    neighbor_loads_t_etrago["scn_name"] = "eGon100RE"
    neighbor_loads_t_etrago["temp_id"] = 1
    for i in neighbor_loads_t.columns:
        neighbor_loads_t_etrago.at[i, "p_set"] = neighbor_loads_t[
            i
        ].values.tolist()

    neighbor_loads_t_etrago.to_sql(
        "egon_etrago_load_timeseries",
        engine,
        schema="grid",
        if_exists="append",
        index=True,
        index_label="load_id",
    )

    # writing neighboring link_t efficiency and p_max_pu to etrago tables
    neighbor_link_t_etrago = pd.DataFrame(
        columns=["scn_name", "temp_id", "p_max_pu", "efficiency"],
        index=neighbor_eff_t.columns.to_list() + ev_p_max_pu.columns.to_list(),
    )
    neighbor_link_t_etrago["scn_name"] = "eGon100RE"
    neighbor_link_t_etrago["temp_id"] = 1
    for i in neighbor_eff_t.columns:
        neighbor_link_t_etrago.at[i, "efficiency"] = neighbor_eff_t[
            i
        ].values.tolist()
    for i in ev_p_max_pu.columns:
        neighbor_link_t_etrago.at[i, "p_max_pu"] = ev_p_max_pu[i].values.tolist()

    neighbor_link_t_etrago.to_sql(
        "egon_etrago_link_timeseries",
        engine,
        schema="grid",
        if_exists="append",
        index=True,
        index_label="link_id",
    )

    # writing neighboring generator_t p_max_pu to etrago tables
    neighbor_gens_t_etrago = pd.DataFrame(
        columns=["scn_name", "temp_id", "p_max_pu"],
        index=neighbor_gens_t.columns,
    )
    neighbor_gens_t_etrago["scn_name"] = "eGon100RE"
    neighbor_gens_t_etrago["temp_id"] = 1
    for i in neighbor_gens_t.columns:
        neighbor_gens_t_etrago.at[i, "p_max_pu"] = neighbor_gens_t[
            i
        ].values.tolist()

    neighbor_gens_t_etrago.to_sql(
        "egon_etrago_generator_timeseries",
        engine,
        schema="grid",
        if_exists="append",
        index=True,
        index_label="generator_id",
    )

    # writing neighboring stores_t e_min_pu to etrago tables
    neighbor_stores_t_etrago = pd.DataFrame(
        columns=["scn_name", "temp_id", "e_min_pu"],
        index=neighbor_stores_t.columns,
    )
    neighbor_stores_t_etrago["scn_name"] = "eGon100RE"
    neighbor_stores_t_etrago["temp_id"] = 1
    for i in neighbor_stores_t.columns:
        neighbor_stores_t_etrago.at[i, "e_min_pu"] = neighbor_stores_t[
            i
        ].values.tolist()

    neighbor_stores_t_etrago.to_sql(
        "egon_etrago_store_timeseries",
        engine,
        schema="grid",
        if_exists="append",
        index=True,
        index_label="store_id",
    )

    # writing neighboring storage_units inflow to etrago tables
    neighbor_storage_t_etrago = pd.DataFrame(
        columns=["scn_name", "temp_id", "inflow"],
        index=neighbor_storage_t.columns,
    )
    neighbor_storage_t_etrago["scn_name"] = "eGon100RE"
    neighbor_storage_t_etrago["temp_id"] = 1
    for i in neighbor_storage_t.columns:
        neighbor_storage_t_etrago.at[i, "inflow"] = neighbor_storage_t[
            i
        ].values.tolist()

    neighbor_storage_t_etrago.to_sql(
        "egon_etrago_storage_timeseries",
        engine,
        schema="grid",
        if_exists="append",
        index=True,
        index_label="storage_id",
    )

    # writing neighboring lines_t s_max_pu to etrago tables
    if not network_solved.lines_t["s_max_pu"].empty:
        neighbor_lines_t_etrago = pd.DataFrame(
            columns=["scn_name", "s_max_pu"], index=neighbor_lines_t.columns
        )
        neighbor_lines_t_etrago["scn_name"] = "eGon100RE"

        for i in neighbor_lines_t.columns:
            neighbor_lines_t_etrago.at[i, "s_max_pu"] = neighbor_lines_t[
                i
            ].values.tolist()

        neighbor_lines_t_etrago.to_sql(
            "egon_etrago_line_timeseries",
            engine,
            schema="grid",
            if_exists="append",
            index=True,
            index_label="line_id",
        )


def prepared_network(planning_horizon=3):
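    """Return a prepared pypsa-eur network.

    If pypsa-eur is run, the prenetwork for the given planning horizon
    index is read from the pypsa-eur results; otherwise, a pre-built
    prenetwork is taken from the data bundle.
    """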
    if egon.data.config.settings()["egon-data"]["--run-pypsa-eur"]:
        with open(
            __path__[0] + "/datasets/pypsaeur/config_prepare.yaml", "r"
        ) as stream:
            data_config = yaml.safe_load(stream)

        target_file = (
            Path(".")
            / "run-pypsa-eur"
            / "pypsa-eur"
            / "results"
            / data_config["run"]["name"]
            / "prenetworks"
            / f"base_s_{data_config['scenario']['clusters'][0]}"
            f"_l{data_config['scenario']['ll'][0]}"
            f"_{data_config['scenario']['opts'][0]}"
            f"_{data_config['scenario']['sector_opts'][0]}"
            f"_{data_config['scenario']['planning_horizons'][planning_horizon]}.nc"
        )

    else:
        target_file = (
            Path(".")
            / "data_bundle_powerd_data"
            / "pypsa_eur"
            / "21122024_3h_clean_run"
            / "results"
            / "prenetworks"
            / "prenetwork_post-manipulate_pre-solve"
            / "base_s_39_lc1.25__cb40ex0-T-H-I-B-solar+p3-dist1_2045.nc"
        )

    return pypsa.Network(target_file.absolute().as_posix())


def overwrite_H2_pipeline_share():
    """Overwrite retrofitted_CH4pipeline-to-H2pipeline_share value

    Overwrite retrofitted_CH4pipeline-to-H2pipeline_share in the
    scenario parameter table if pypsa-eur-sec is run.
    This function writes to the database and has no return value.

    """
    scn_name = "eGon100RE"
    # Select source and target from dataset configuration
    target = egon.data.config.datasets()["pypsa-eur-sec"]["target"]

    n = read_network()

    H2_pipelines = n.links[n.links["carrier"] == "H2 pipeline retrofitted"]
    CH4_pipelines = n.links[n.links["carrier"] == "gas pipeline"]
    H2_pipes_share = np.mean(
        [
            (i / j)
            for i, j in zip(
                H2_pipelines.p_nom_opt.to_list(), CH4_pipelines.p_nom.to_list()
            )
        ]
    )
    logger.info(
        "retrofitted_CH4pipeline-to-H2pipeline_share = " + str(H2_pipes_share)
    )

    parameters = db.select_dataframe(
        f"""
        SELECT *
        FROM {target['scenario_parameters']['schema']}.{target['scenario_parameters']['table']}
        WHERE name = '{scn_name}'
        """
    )

    gas_param = parameters.loc[0, "gas_parameters"]
    gas_param["retrofitted_CH4pipeline-to-H2pipeline_share"] = H2_pipes_share
    gas_param = json.dumps(gas_param)

    # Update data in db
    db.execute_sql(
        f"""
    UPDATE {target['scenario_parameters']['schema']}.{target['scenario_parameters']['table']}
    SET gas_parameters = '{gas_param}'
    WHERE name = '{scn_name}';
    """
    )


def update_electrical_timeseries_germany(network):
    """Replace electrical demand time series in Germany with data from egon-data

    Parameters
    ----------
    network : pypsa.Network
        Network including demand time series from pypsa-eur

    Returns
    -------
    network : pypsa.Network
        Network including electrical demand time series in Germany from egon-data

    """
    year = network.year
    skip = network.snapshot_weightings.objective.iloc[0].astype("int")
    df = pd.read_csv(
        "input-pypsa-eur-sec/electrical_demand_timeseries_DE_eGon100RE.csv"
    )

    annual_demand = pd.Series(index=[2019, 2037])
    annual_demand_industry = pd.Series(index=[2019, 2037])
    # Define values from status2019 for interpolation
    # Residential and service (in TWh)
    annual_demand.loc[2019] = 124.71 + 143.26
    # Industry (in TWh)
    annual_demand_industry.loc[2019] = 241.925

    # Define values from NEP 2023 scenario B 2037 for interpolation
    # Residential and service (in TWh)
    annual_demand.loc[2037] = 104 + 153.1
    # Industry (in TWh)
    annual_demand_industry.loc[2037] = 334.0

    # Set interpolated demands for years before 2037
    if year < 2037:
        # Calculate annual demands for year by linearly interpolating
        # between 2019 and 2037.
        # Done separately for industry and for residential and service to
        # fit to pypsa-eur's structure.
        annual_rate = (annual_demand.loc[2037] - annual_demand.loc[2019]) / (
            2037 - 2019
        )
        annual_demand_year = annual_demand.loc[2019] + annual_rate * (
            year - 2019
        )

        annual_rate_industry = (
            annual_demand_industry.loc[2037] - annual_demand_industry.loc[2019]
        ) / (2037 - 2019)
        annual_demand_year_industry = annual_demand_industry.loc[
            2019
        ] + annual_rate_industry * (year - 2019)

        # Scale time series for the 100% scenario with the annual demands.
        # The shape of the curve is taken from the 100% scenario since the
        # same weather and calendar year is used there.
        network.loads_t.p_set.loc[:, "DE0 0"] = (
            df["residential_and_service"].loc[::skip]
            / df["residential_and_service"].sum()
            * annual_demand_year
            * 1e6
        ).values

        network.loads_t.p_set.loc[:, "DE0 0 industry electricity"] = (
            df["industry"].loc[::skip]
            / df["industry"].sum()
            * annual_demand_year_industry
            * 1e6
        ).values

    elif year == 2045:
        network.loads_t.p_set.loc[:, "DE0 0"] = (
            df["residential_and_service"].loc[::skip].values
        )

        network.loads_t.p_set.loc[:, "DE0 0 industry electricity"] = (
            df["industry"].loc[::skip].values
        )

    else:
        print(
            "Scaling not implemented for years between 2037 and 2045, "
            "or beyond 2045."
        )
        return

    network.loads.loc["DE0 0 industry electricity", "p_set"] = 0.0

    return network


def geothermal_district_heating(network):
    """Add the option to build geothermal power plants in district heating in Germany

    Parameters
    ----------
    network : pypsa.Network
        Network from pypsa-eur without geothermal generators

    Returns
    -------
    network : pypsa.Network
        Updated network with geothermal generators

    """

    costs_and_potentials = pd.read_csv(
        "input-pypsa-eur-sec/geothermal_potential_germany.csv"
    )

    network.add("Carrier", "urban central geo thermal")

    for i, row in costs_and_potentials.iterrows():
        # Set lifetime of geothermal plant to 30 years based on:
        # Ableitung eines Korridors für den Ausbau der erneuerbaren Wärme im Gebäudebereich,
        # Beuth Hochschule für Technik, Berlin ifeu – Institut für Energie- und Umweltforschung Heidelberg GmbH
        # Februar 2017
        lifetime_geothermal = 30
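
        # Annualize overnight investment costs over the plant lifetime,
        # assuming a 7 % interest rate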
        network.add(
            "Generator",
            f"DE0 0 urban central geo thermal {i}",
            bus="DE0 0 urban central heat",
            carrier="urban central geo thermal",
            p_nom_extendable=True,
            p_nom_max=row["potential [MW]"],
            capital_cost=annualize_capital_costs(
                row["cost [EUR/kW]"] * 1e6, lifetime_geothermal, 0.07
            ),
        )
    return network


def h2_overground_stores(network):
    """Add hydrogen overground stores to each hydrogen node

    In pypsa-eur, only countries without the potential of underground hydrogen
    stores have the option to build overground hydrogen tanks.
    Overground stores are more expensive, but are not restricted by the
    geological potential. To allow higher hydrogen store capacities in each
    country, optional hydrogen overground tanks are also added to nodes with
    a potential for underground stores.

    Parameters
    ----------
    network : pypsa.Network
        Network without hydrogen overground stores at each hydrogen node

    Returns
    -------
    network : pypsa.Network
        Network with hydrogen overground stores at each hydrogen node

    """

    underground_h2_stores = network.stores[
        (network.stores.carrier == "H2 Store")
        & (network.stores.e_nom_max != np.inf)
    ]

    overground_h2_stores = network.stores[
        (network.stores.carrier == "H2 Store")
        & (network.stores.e_nom_max == np.inf)
    ]

    network.madd(
        "Store",
        underground_h2_stores.bus + " overground Store",
        bus=underground_h2_stores.bus.values,
        e_nom_extendable=True,
        e_cyclic=True,
        carrier="H2 Store",
        capital_cost=overground_h2_stores.capital_cost.mean(),
    )

    return network


def update_heat_timeseries_germany(network):
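    """Replace heat demand time series in Germany with data from eGon-data.

    Parameters
    ----------
    network : pypsa.Network
        Network including heat demand time series from pypsa-eur

    Returns
    -------
    network : pypsa.Network
        Network with updated rural and urban central heat demand in Germany

    """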
    # Import heat demand curves for Germany from eGon-data
    df_egon_heat_demand = pd.read_csv(
        "input-pypsa-eur-sec/heat_demand_timeseries_DE_eGon100RE.csv"
    )

    # Replace heat demand curves in Germany with values from eGon-data
    network.loads_t.p_set.loc[:, "DE1 0 rural heat"] = (
        df_egon_heat_demand.loc[:, "residential rural"].values
        + df_egon_heat_demand.loc[:, "service rural"].values
    )

    network.loads_t.p_set.loc[:, "DE1 0 urban central heat"] = (
        df_egon_heat_demand.loc[:, "urban central"].values
    )

    return network


def drop_biomass(network):
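    """Remove all components whose name contains "biomass" from the network."""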
    carrier = "biomass"

    for c in network.iterate_components():
        network.mremove(c.name, c.df[c.df.index.str.contains(carrier)].index)
    return network


def postprocessing_biomass_2045():
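    """Drop biomass from the solved network and re-export it as the 2045 postnetwork."""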
    network = read_network()
    network = drop_biomass(network)

    with open(
        __path__[0] + "/datasets/pypsaeur/config_solve.yaml", "r"
    ) as stream:
        data_config = yaml.safe_load(stream)

    target_file = (
        Path(".")
        / "run-pypsa-eur"
        / "pypsa-eur"
        / "results"
        / data_config["run"]["name"]
        / "postnetworks"
        / f"base_s_{data_config['scenario']['clusters'][0]}"
        f"_l{data_config['scenario']['ll'][0]}"
        f"_{data_config['scenario']['opts'][0]}"
        f"_{data_config['scenario']['sector_opts'][0]}"
        f"_{data_config['scenario']['planning_horizons'][3]}.nc"
    )

    network.export_to_netcdf(target_file)


def drop_urban_decentral_heat(network):
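    """Dissolve urban decentral heat into rural heat.

    Urban decentral heat demand is added to the rural heat demand, other
    loads on urban decentral heat buses are reconnected to the rural heat
    buses, and all remaining urban decentral components are removed.
    """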
    carrier = "urban decentral heat"

    # Add urban decentral heat demand to rural heat demand
    for country in network.loads.loc[
        network.loads.carrier == carrier, "bus"
    ].str[:5]:

        if f"{country} {carrier}" in network.loads_t.p_set.columns:
            network.loads_t.p_set[
                f"{country} rural heat"
            ] += network.loads_t.p_set[f"{country} {carrier}"]
        else:
            print(
                f"""No time series available for {country} {carrier}.
                  Using static p_set."""
            )

            network.loads_t.p_set[
                f"{country} rural heat"
            ] += network.loads.loc[f"{country} {carrier}", "p_set"]

    # In some cases low-temperature heat for industry is connected to the
    # urban decentral heat bus since there is no urban central heat bus.
    # These loads are connected to the representative rural heat bus:
    network.loads.loc[
        (network.loads.bus.str.contains(carrier))
        & (~network.loads.carrier.str.contains(carrier.replace(" heat", ""))),
        "bus",
    ] = network.loads.loc[
        (network.loads.bus.str.contains(carrier))
        & (~network.loads.carrier.str.contains(carrier.replace(" heat", ""))),
        "bus",
    ].str.replace(
        "urban decentral", "rural"
    )

    # Drop components attached to urban decentral heat
    for c in network.iterate_components():
        network.mremove(
            c.name, c.df[c.df.index.str.contains("urban decentral")].index
        )

    return network


def district_heating_shares(network):
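    """Split heat demand into urban central and rural heat.

    District heating shares from eGon-data determine which part of the
    heat demand per node is served by district heating (urban central)
    and which part is decentral (rural).
    """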
    df = pd.read_csv(
        "data_bundle_powerd_data/district_heating_shares_egon.csv"
    ).set_index("country_code")

    heat_demand_per_country = (
        network.loads_t.p_set[
            network.loads[
                (network.loads.carrier.str.contains("heat"))
                & network.loads.index.isin(network.loads_t.p_set.columns)
            ].index
        ]
        .groupby(network.loads.bus.str[:5], axis=1)
        .sum()
    )

    for country in heat_demand_per_country.columns:
        network.loads_t.p_set[f"{country} urban central heat"] = (
            heat_demand_per_country.loc[:, country].mul(
                df.loc[country[:2]].values[0]
            )
        )
        network.loads_t.p_set[f"{country} rural heat"] = (
            heat_demand_per_country.loc[:, country].mul(
                (1 - df.loc[country[:2]].values[0])
            )
        )

    # Drop links with undefined buses or carrier
    network.mremove(
        "Link",
        network.links[
            ~network.links.bus0.isin(network.buses.index.values)
        ].index,
    )
    network.mremove(
        "Link",
        network.links[network.links.carrier == ""].index,
    )

    return network


def drop_new_gas_pipelines(network):
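    """Remove newly built gas pipelines ("gas pipeline new") from the network."""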
    network.mremove(
        "Link",
        network.links[
            network.links.index.str.contains("gas pipeline new")
        ].index,
    )

    return network


def drop_fossil_gas(network):
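    """Remove fossil gas generators from the network."""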
    network.mremove(
        "Generator",
        network.generators[network.generators.carrier == "gas"].index,
    )

    return network


def drop_conventional_power_plants(network):
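    """Drop lignite and coal power plants in Germany from the network."""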
    # Drop lignite and coal power plants in Germany
    network.mremove(
        "Link",
        network.links[
            (network.links.carrier.isin(["coal", "lignite"]))
            & (network.links.bus1.str.startswith("DE"))
        ].index,
    )

    return network


def rual_heat_technologies(network):
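    """Remove rural gas boilers and rural solar thermal generators from the network."""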
    network.mremove(
        "Link",
        network.links[
            network.links.index.str.contains("rural gas boiler")
        ].index,
    )

    network.mremove(
        "Generator",
        network.generators[
            network.generators.carrier.str.contains("rural solar thermal")
        ].index,
    )

    return network


def coal_exit_D():
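    """Enforce the German coal phase-out.

    Caps the decommissioning year (DateOut) of German lignite and hard
    coal power plants at 2034 in the pypsa-eur powerplants table.
    """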
    df = pd.read_csv(
        "run-pypsa-eur/pypsa-eur/resources/powerplants_s_39.csv", index_col=0
    )
    df_de_coal = df[
        (df.Country == "DE")
        & ((df.Fueltype == "Lignite") | (df.Fueltype == "Hard Coal"))
    ].copy()
    df_de_coal.loc[df_de_coal.DateOut.values >= 2035, "DateOut"] = 2034
    df.loc[df_de_coal.index] = df_de_coal

    df.to_csv("run-pypsa-eur/pypsa-eur/resources/powerplants_s_39.csv")


def offwind_potential_D(network, capacity_per_sqkm=4):
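    """Raise the installable offshore wind potential in Germany.

    The per-technology factors presumably correspond to the eligible
    area in square kilometres; multiplied by ``capacity_per_sqkm`` they
    give the maximum installable capacity (p_nom_max) in MW.
    """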
    offwind_ac_factor = 1942
    offwind_dc_factor = 10768
    offwind_float_factor = 134

    # Set p_nom_max for German offshore wind with respect to
    # capacity_per_sqkm = 4 instead of the default of 2, which is applied
    # for the rest of Europe.
    network.generators.loc[
        (network.generators.bus == "DE0 0")
        & (network.generators.carrier == "offwind-ac"),
        "p_nom_max",
    ] = (
        offwind_ac_factor * capacity_per_sqkm
    )
    network.generators.loc[
        (network.generators.bus == "DE0 0")
        & (network.generators.carrier == "offwind-dc"),
        "p_nom_max",
    ] = (
        offwind_dc_factor * capacity_per_sqkm
    )
    network.generators.loc[
        (network.generators.bus == "DE0 0")
        & (network.generators.carrier == "offwind-float"),
        "p_nom_max",
    ] = (
        offwind_float_factor * capacity_per_sqkm
    )

    return network


def additional_grid_expansion_2045(network):
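    """Relax the grid expansion limit (lc_limit) by 5 % for 2045."""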
    network.global_constraints.loc["lc_limit", "constant"] *= 1.05

    return network


def execute():
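    """Adjust the pypsa-eur prenetworks to egon-data assumptions.

    Depending on the foresight option, the prenetworks of the myopic
    pathway or of the overnight long-term scenario are manipulated and
    re-exported before solving.
    """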
    if egon.data.config.settings()["egon-data"]["--run-pypsa-eur"]:
        with open(
            __path__[0] + "/datasets/pypsaeur/config.yaml", "r"
        ) as stream:
            data_config = yaml.safe_load(stream)

        if data_config["foresight"] == "myopic":

            print("Adjusting scenarios on the myopic pathway...")

            coal_exit_D()

            networks = pd.Series(dtype=str)

            for i in range(
                0, len(data_config["scenario"]["planning_horizons"])
            ):
                nc_file = pd.Series(
                    f"base_s_{data_config['scenario']['clusters'][0]}"
                    f"_l{data_config['scenario']['ll'][0]}"
                    f"_{data_config['scenario']['opts'][0]}"
                    f"_{data_config['scenario']['sector_opts'][0]}"
                    f"_{data_config['scenario']['planning_horizons'][i]}.nc"
                )
                networks = pd.concat([networks, nc_file])

            scn_path = pd.DataFrame(
                index=["2025", "2030", "2035", "2045"],
                columns=["prenetwork", "functions"],
            )

            for year in scn_path.index:
                scn_path.at[year, "prenetwork"] = networks[
                    networks.str.contains(year)
                ].values[0]

            for year in ["2025", "2030", "2035"]:
                scn_path.at[year, "functions"] = [
                    # drop_urban_decentral_heat,
                    update_electrical_timeseries_germany,
                    geothermal_district_heating,
                    h2_overground_stores,
                    drop_new_gas_pipelines,
                    offwind_potential_D,
                ]

            scn_path.at["2045", "functions"] = [
                drop_biomass,
                # drop_urban_decentral_heat,
                update_electrical_timeseries_germany,
                geothermal_district_heating,
                h2_overground_stores,
                drop_new_gas_pipelines,
                drop_fossil_gas,
                offwind_potential_D,
                additional_grid_expansion_2045,
                # drop_conventional_power_plants,
                # rual_heat_technologies,  # To be defined
            ]

            network_path = (
                Path(".")
                / "run-pypsa-eur"
                / "pypsa-eur"
                / "results"
                / data_config["run"]["name"]
                / "prenetworks"
            )

            for scn in scn_path.index:
                path = network_path / scn_path.at[scn, "prenetwork"]
                network = pypsa.Network(path)
                network.year = int(scn)
                for manipulator in scn_path.at[scn, "functions"]:
                    network = manipulator(network)
                network.export_to_netcdf(path)

        elif (data_config["foresight"] == "overnight") and (
            int(data_config["scenario"]["planning_horizons"][0]) > 2040
        ):

            print("Adjusting overnight long-term scenario...")

            network_path = (
                Path(".")
                / "run-pypsa-eur"
                / "pypsa-eur"
                / "results"
                / data_config["run"]["name"]
                / "prenetworks"
                / f"elec_s_{data_config['scenario']['clusters'][0]}"
                f"_l{data_config['scenario']['ll'][0]}"
                f"_{data_config['scenario']['opts'][0]}"
                f"_{data_config['scenario']['sector_opts'][0]}"
                f"_{data_config['scenario']['planning_horizons'][0]}.nc"
            )

            network = pypsa.Network(network_path)
            # Set the network year, which is read by
            # update_electrical_timeseries_germany
            network.year = int(
                data_config["scenario"]["planning_horizons"][0]
            )

            network = drop_biomass(network)
            network = drop_urban_decentral_heat(network)
            network = district_heating_shares(network)
            network = update_heat_timeseries_germany(network)
            network = update_electrical_timeseries_germany(network)
            network = geothermal_district_heating(network)
            network = h2_overground_stores(network)
            network = drop_new_gas_pipelines(network)
            network = drop_fossil_gas(network)
            network = rual_heat_technologies(network)

            network.export_to_netcdf(network_path)

        else:
            print(
                f"""Adjustments on prenetworks are not implemented for
                foresight option {data_config['foresight']} and
                year {int(data_config['scenario']['planning_horizons'][0])}.
                Please check the pypsaeur.execute function.
                """
            )
    else:
        print("Pypsa-eur is not executed due to the settings of egon-data")