data.datasets.pypsaeur — rating: F

Complexity
    Total Complexity: 127

Size/Duplication
    Total Lines: 2308
    Duplicated Lines: 2.9 %

Importance
    Changes: 0

Metric   Value
wmc      127
eloc     1420
dl       67
loc      2308
rs       0.8
c        0
b        0
f        0
2 Methods

Rating   Name   Duplication   Size   Complexity  
A PreparePypsaEur.__init__() 0 8 1
A RunPypsaEur.__init__() 0 11 1

27 Functions

Rating   Name   Duplication   Size   Complexity  
A solve_network() 0 38 2
A prepare_network_2() 0 27 2
F download() 0 188 12
A prepare_network() 0 45 4
D combine_decentral_and_rural_heat() 0 74 12
A read_network() 33 33 3
A clean_database() 0 83 3
A electrical_neighbours_egon100() 0 7 2
A coal_exit_D() 0 13 1
A h2_overground_stores() 0 43 1
B update_electrical_timeseries_germany() 0 89 3
A overwrite_H2_pipeline_share() 0 43 1
A drop_fossil_gas() 0 7 1
A drop_biomass() 0 6 2
A prepared_network() 34 34 3
A rual_heat_technologies() 0 16 1
D execute() 0 132 10
A drop_conventional_power_plants() 0 12 1
A additional_grid_expansion_2045() 0 5 1
A drop_urban_decentral_heat() 0 44 4
A offwind_potential_D() 0 30 1
F neighbor_reduction() 0 1081 47
A drop_new_gas_pipelines() 0 9 1
A update_heat_timeseries_germany() 0 18 1
A district_heating_shares() 0 41 2
A geothermal_district_heating() 0 40 2
A postprocessing_biomass_2045() 0 25 2

How to fix

Duplicated Code

Duplicate code is one of the most pungent code smells. A commonly used rule of thumb is to re-structure code once it is duplicated in three or more places.

In this module the duplication sits in read_network() (33 duplicated lines) and prepared_network() (34 duplicated lines); extracting their shared logic into a single helper would remove it.

Complexity

 Tip:   Before tackling complexity, make sure that you eliminate any duplication first. This can often reduce the size of classes significantly.

Complex classes like data.datasets.pypsaeur often do a lot of different things. To break such a class down, we need to identify a cohesive component within that class. A common approach to find such a component is to look for fields/methods that share the same prefixes, or suffixes.

Once you have determined the fields that belong together, you can apply the Extract Class refactoring. If the component makes sense as a sub-class, Extract Subclass is also a candidate, and is often faster.
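As a concrete illustration, the drop_* helpers listed in the function table above (drop_fossil_gas, drop_biomass, drop_new_gas_pipelines, drop_conventional_power_plants, drop_urban_decentral_heat) share a prefix and, judging by their names, all manipulate the same network object, which makes them a natural Extract Class candidate. A minimal sketch of that refactoring follows; the NetworkManipulator name and the placeholder bodies are ours, not the project's:

class NetworkManipulator:
    """Bundles the drop_* steps that all mutate one PyPSA network."""

    def __init__(self, network):
        self.network = network

    def drop_biomass(self):
        # move the body of the module-level drop_biomass() here,
        # replacing its network argument with self.network
        ...

    def drop_fossil_gas(self):
        # same for the module-level drop_fossil_gas()
        ...

# usage: NetworkManipulator(network).drop_biomass()

Each extracted method shrinks the module and moves its complexity into a class that can be tested in isolation.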

"""The central module containing all code dealing with importing data from
the pypsa-eur-sec scenario parameter creation
"""

from pathlib import Path
from urllib.request import urlretrieve
import json
import shutil

from shapely.geometry import LineString
import geopandas as gpd
import importlib_resources as resources
import numpy as np
import pandas as pd
import pypsa
import requests
import yaml

from egon.data import __path__, config, db, logger
from egon.data.datasets import Dataset
from egon.data.datasets.scenario_parameters import get_sector_parameters
from egon.data.datasets.scenario_parameters.parameters import (
    annualize_capital_costs,
)
import egon.data.config
import egon.data.subprocess as subproc


class PreparePypsaEur(Dataset):
    def __init__(self, dependencies):
        super().__init__(
            name="PreparePypsaEur",
            version="0.0.42",
            dependencies=dependencies,
            tasks=(
                download,
                prepare_network,
            ),
        )


class RunPypsaEur(Dataset):
    def __init__(self, dependencies):
        super().__init__(
            name="SolvePypsaEur",
            version="0.0.41",
            dependencies=dependencies,
            tasks=(
                prepare_network_2,
                execute,
                solve_network,
                clean_database,
                electrical_neighbours_egon100,
                # Dropped until we have decided how to deal with the H2 grid
                # overwrite_H2_pipeline_share,
            ),
        )

def download():
    cwd = Path(".")
    filepath = cwd / "run-pypsa-eur"
    filepath.mkdir(parents=True, exist_ok=True)

    pypsa_eur_repos = filepath / "pypsa-eur"
    if config.settings()["egon-data"]["--run-pypsa-eur"]:
        if not pypsa_eur_repos.exists():
            subproc.run(
                [
                    "git",
                    "clone",
                    "--branch",
                    "master",
                    "https://github.com/PyPSA/pypsa-eur.git",
                    pypsa_eur_repos,
                ]
            )

            subproc.run(
                [
                    "git",
                    "checkout",
                    "2119f4cee05c256509f48d4e9fe0d8fd9e9e3632",
                ],
                cwd=pypsa_eur_repos,
            )

            # Add gurobi solver to environment:
            # Read YAML file
            # path_to_env = pypsa_eur_repos / "envs" / "environment.yaml"
            # with open(path_to_env, "r") as stream:
            #    env = yaml.safe_load(stream)

            # The version of gurobipy has to fit to the version of gurobi.
            # Since we mainly use gurobi 10.0 this is set here.
            # env["dependencies"][-1]["pip"].append("gurobipy==10.0.0")

            # Set python version to <3.12
            # Python<=3.12 needs gurobipy>=11.0, in case gurobipy is updated,
            # this can be removed
            # env["dependencies"] = [
            #    "python>=3.8,<3.12" if x == "python>=3.8" else x
            #    for x in env["dependencies"]
            # ]

            # Limit geopandas version
            # our pypsa-eur version is not compatible to geopandas>1
            # env["dependencies"] = [
            #    "geopandas>=0.11.0,<1" if x == "geopandas>=0.11.0" else x
            #    for x in env["dependencies"]
            # ]

            # Write YAML file
            # with open(path_to_env, "w", encoding="utf8") as outfile:
            #    yaml.dump(
            #        env, outfile, default_flow_style=False, allow_unicode=True
            #    )

            # Copy config file for egon-data to pypsa-eur directory
            shutil.copy(
                Path(
                    __path__[0], "datasets", "pypsaeur", "config_prepare.yaml"
                ),
                pypsa_eur_repos / "config" / "config.yaml",
            )

            # Copy custom_extra_functionality.py file for egon-data to pypsa-eur directory
            shutil.copy(
                Path(
                    __path__[0],
                    "datasets",
                    "pypsaeur",
                    "custom_extra_functionality.py",
                ),
                pypsa_eur_repos / "data",
            )

            with open(filepath / "Snakefile", "w") as snakefile:
                snakefile.write(
                    resources.read_text(
                        "egon.data.datasets.pypsaeur", "Snakefile"
                    )
                )

        # Copy era5 weather data to folder for pypsa-eur
        era5_pypsaeur_path = filepath / "pypsa-eur" / "cutouts"

        if not era5_pypsaeur_path.exists():
            era5_pypsaeur_path.mkdir(parents=True, exist_ok=True)
            copy_from = config.datasets()["era5_weather_data"]["targets"][
                "weather_data"
            ]["path"]
            filename = "europe-2011-era5.nc"
            shutil.copy(
                copy_from + "/" + filename, era5_pypsaeur_path / filename
            )

        # Workaround to download natura, shipdensity and globalenergymonitor
        # data, which does not work in the regular snakemake workflow.
        # The same files are downloaded from the same directory as in
        # pypsa-eur version 0.10. They are stored in the folders used by
        # pypsa-eur.
        if not (filepath / "pypsa-eur" / "resources").exists():
            (filepath / "pypsa-eur" / "resources").mkdir(
                parents=True, exist_ok=True
            )
        urlretrieve(
            "https://zenodo.org/record/4706686/files/natura.tiff",
            filepath / "pypsa-eur" / "resources" / "natura.tiff",
        )

        if not (filepath / "pypsa-eur" / "data").exists():
            (filepath / "pypsa-eur" / "data").mkdir(
                parents=True, exist_ok=True
            )
        urlretrieve(
            "https://zenodo.org/record/13757228/files/shipdensity_global.zip",
            filepath / "pypsa-eur" / "data" / "shipdensity_global.zip",
        )

        if not (
            filepath
            / "pypsa-eur"
            / "zenodo.org"
            / "records"
            / "13757228"
            / "files"
        ).exists():
            (
                filepath
                / "pypsa-eur"
                / "zenodo.org"
                / "records"
                / "13757228"
                / "files"
            ).mkdir(parents=True, exist_ok=True)

        urlretrieve(
            "https://zenodo.org/records/10356004/files/ENSPRESO_BIOMASS.xlsx",
            filepath
            / "pypsa-eur"
            / "zenodo.org"
            / "records"
            / "13757228"
            / "files"
            / "ENSPRESO_BIOMASS.xlsx",
        )

        if not (filepath / "pypsa-eur" / "data" / "gem").exists():
            (filepath / "pypsa-eur" / "data" / "gem").mkdir(
                parents=True, exist_ok=True
            )

        r = requests.get(
            "https://tubcloud.tu-berlin.de/s/LMBJQCsN6Ez5cN2/download/"
            "Europe-Gas-Tracker-2024-05.xlsx"
        )
        with open(
            filepath
            / "pypsa-eur"
            / "data"
            / "gem"
            / "Europe-Gas-Tracker-2024-05.xlsx",
            "wb",
        ) as outfile:
            outfile.write(r.content)

        if not (filepath / "pypsa-eur" / "data" / "gem").exists():
            (filepath / "pypsa-eur" / "data" / "gem").mkdir(
                parents=True, exist_ok=True
            )

        r = requests.get(
            "https://tubcloud.tu-berlin.de/s/Aqebo3rrQZWKGsG/download/"
            "Global-Steel-Plant-Tracker-April-2024-Standard-Copy-V1.xlsx"
        )
        with open(
            filepath
            / "pypsa-eur"
            / "data"
            / "gem"
            / "Global-Steel-Plant-Tracker-April-2024-Standard-Copy-V1.xlsx",
            "wb",
        ) as outfile:
            outfile.write(r.content)

    else:
        print("Pypsa-eur is not executed due to the settings of egon-data")

def prepare_network():
    cwd = Path(".")
    filepath = cwd / "run-pypsa-eur"

    if config.settings()["egon-data"]["--run-pypsa-eur"]:
        subproc.run(
            [
                "snakemake",
                "-j1",
                "--directory",
                filepath,
                "--snakefile",
                filepath / "Snakefile",
                "--use-conda",
                "--conda-frontend=conda",
                "--cores",
                "8",
                "prepare",
            ]
        )
        execute()

        path = filepath / "pypsa-eur" / "results" / "prenetworks"

        path_2 = path / "prenetwork_post-manipulate_pre-solve"
        path_2.mkdir(parents=True, exist_ok=True)

        with open(
            __path__[0] + "/datasets/pypsaeur/config_prepare.yaml", "r"
        ) as stream:
            data_config = yaml.safe_load(stream)

        for i in range(0, len(data_config["scenario"]["planning_horizons"])):
            nc_file = (
                f"base_s_{data_config['scenario']['clusters'][0]}"
                f"_l{data_config['scenario']['ll'][0]}"
                f"_{data_config['scenario']['opts'][0]}"
                f"_{data_config['scenario']['sector_opts'][0]}"
                f"_{data_config['scenario']['planning_horizons'][i]}.nc"
            )

            shutil.copy(Path(path, nc_file), path_2)

    else:
        print("Pypsa-eur is not executed due to the settings of egon-data")

def prepare_network_2():
    cwd = Path(".")
    filepath = cwd / "run-pypsa-eur"

    if config.settings()["egon-data"]["--run-pypsa-eur"]:
        shutil.copy(
            Path(__path__[0], "datasets", "pypsaeur", "config_solve.yaml"),
            filepath / "pypsa-eur" / "config" / "config.yaml",
        )

        subproc.run(
            [
                "snakemake",
                "-j1",
                "--directory",
                filepath,
                "--snakefile",
                filepath / "Snakefile",
                "--use-conda",
                "--conda-frontend=conda",
                "--cores",
                "8",
                "prepare",
            ]
        )
    else:
        print("Pypsa-eur is not executed due to the settings of egon-data")


def solve_network():
    cwd = Path(".")
    filepath = cwd / "run-pypsa-eur"

    if config.settings()["egon-data"]["--run-pypsa-eur"]:
        subproc.run(
            [
                "snakemake",
                "-j1",
                "--cores",
                "8",
                "--directory",
                filepath,
                "--snakefile",
                filepath / "Snakefile",
                "--use-conda",
                "--conda-frontend=conda",
                "solve",
            ]
        )

        postprocessing_biomass_2045()

        subproc.run(
            [
                "snakemake",
                "-j1",
                "--directory",
                filepath,
                "--snakefile",
                filepath / "Snakefile",
                "--use-conda",
                "--conda-frontend=conda",
                "summary",
            ]
        )
    else:
        print("Pypsa-eur is not executed due to the settings of egon-data")

# [Scrutinizer: View Code Duplication] This code seems to be duplicated in
# the project; per the function table above, read_network() (33 duplicated
# lines) and prepared_network() (34 duplicated lines) appear to share it.
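# A minimal de-duplication sketch (editor's suggestion, not project code):
# both functions could delegate the path assembly to one shared helper,
# e.g. a hypothetical results_path():
#
#     def results_path(data_config, planning_horizon, subdir="postnetworks"):
#         """Build the path to one PyPSA-Eur result network file."""
#         scenario = data_config["scenario"]
#         return (
#             Path(".") / "run-pypsa-eur" / "pypsa-eur" / "results"
#             / data_config["run"]["name"] / subdir
#             / (
#                 f"base_s_{scenario['clusters'][0]}"
#                 f"_l{scenario['ll'][0]}"
#                 f"_{scenario['opts'][0]}"
#                 f"_{scenario['sector_opts'][0]}"
#                 f"_{scenario['planning_horizons'][planning_horizon]}.nc"
#             )
#         )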
def read_network(planning_horizon=3):
    if config.settings()["egon-data"]["--run-pypsa-eur"]:
        with open(
            __path__[0] + "/datasets/pypsaeur/config_solve.yaml", "r"
        ) as stream:
            data_config = yaml.safe_load(stream)

        target_file = (
            Path(".")
            / "run-pypsa-eur"
            / "pypsa-eur"
            / "results"
            / data_config["run"]["name"]
            / "postnetworks"
            / f"base_s_{data_config['scenario']['clusters'][0]}"
            f"_l{data_config['scenario']['ll'][0]}"
            f"_{data_config['scenario']['opts'][0]}"
            f"_{data_config['scenario']['sector_opts'][0]}"
            f"_{data_config['scenario']['planning_horizons'][planning_horizon]}.nc"
        )

    else:
        target_file = (
            Path(".")
            / "data_bundle_powerd_data"
            / "pypsa_eur"
            / "21122024_3h_clean_run"
            / "results"
            / "postnetworks"
            / "base_s_39_lc1.25__cb40ex0-T-H-I-B-solar+p3-dist1_2045.nc"
        )

    return pypsa.Network(target_file)

def clean_database():
    """Remove all components abroad for eGon100RE from the database

    Remove all components abroad and their associated time series of
    the database for the scenario 'eGon100RE'.

    Parameters
    ----------
    None

    Returns
    -------
    None

    """
    scn_name = "eGon100RE"

    comp_one_port = ["load", "generator", "store", "storage"]

    # delete existing components and associated timeseries
    for comp in comp_one_port:
        db.execute_sql(
            f"""
            DELETE FROM {"grid.egon_etrago_" + comp + "_timeseries"}
            WHERE {comp + "_id"} IN (
                SELECT {comp + "_id"} FROM {"grid.egon_etrago_" + comp}
                WHERE bus IN (
                    SELECT bus_id FROM grid.egon_etrago_bus
                    WHERE country != 'DE'
                    AND scn_name = '{scn_name}')
                AND scn_name = '{scn_name}'
            );

            DELETE FROM {"grid.egon_etrago_" + comp}
            WHERE bus IN (
                SELECT bus_id FROM grid.egon_etrago_bus
                WHERE country != 'DE'
                AND scn_name = '{scn_name}')
            AND scn_name = '{scn_name}';"""
        )

    comp_2_ports = [
        "line",
        "link",
    ]

    for comp, id in zip(comp_2_ports, ["line_id", "link_id"]):
        db.execute_sql(
            f"""
            DELETE FROM {"grid.egon_etrago_" + comp + "_timeseries"}
            WHERE scn_name = '{scn_name}'
            AND {id} IN (
                SELECT {id} FROM {"grid.egon_etrago_" + comp}
            WHERE "bus0" IN (
            SELECT bus_id FROM grid.egon_etrago_bus
                WHERE country != 'DE'
                AND scn_name = '{scn_name}'
                AND bus_id NOT IN (SELECT bus_i FROM osmtgmod_results.bus_data))
            AND "bus1" IN (
            SELECT bus_id FROM grid.egon_etrago_bus
                WHERE country != 'DE'
                AND scn_name = '{scn_name}'
                AND bus_id NOT IN (SELECT bus_i FROM osmtgmod_results.bus_data))
            );


            DELETE FROM {"grid.egon_etrago_" + comp}
            WHERE scn_name = '{scn_name}'
            AND "bus0" IN (
            SELECT bus_id FROM grid.egon_etrago_bus
                WHERE country != 'DE'
                AND scn_name = '{scn_name}'
                AND bus_id NOT IN (SELECT bus_i FROM osmtgmod_results.bus_data))
            AND "bus1" IN (
            SELECT bus_id FROM grid.egon_etrago_bus
                WHERE country != 'DE'
                AND scn_name = '{scn_name}'
                AND bus_id NOT IN (SELECT bus_i FROM osmtgmod_results.bus_data))
            ;"""
        )

    db.execute_sql(
        f"""
        DELETE FROM grid.egon_etrago_bus
        WHERE scn_name = '{scn_name}'
        AND country <> 'DE'
        AND carrier <> 'AC'
        """
    )

def electrical_neighbours_egon100():
    if "eGon100RE" in egon.data.config.settings()["egon-data"]["--scenarios"]:
        neighbor_reduction()

    else:
        print(
            "eGon100RE is not in the list of created scenarios, this task is skipped."
        )

def combine_decentral_and_rural_heat(network_solved, network_prepared):

    for comp in network_solved.iterate_components():

        if comp.name in ["Bus", "Link", "Store"]:
            urban_decentral = comp.df[
                comp.df.carrier.str.contains("urban decentral")
            ]
            rural = comp.df[comp.df.carrier.str.contains("rural")]
            for i, row in urban_decentral.iterrows():
                if "DE" not in i:
                    if comp.name in ["Bus"]:
                        network_solved.remove("Bus", i)
                    if comp.name in ["Link", "Generator"]:
                        if (
                            i.replace("urban decentral", "rural")
                            in rural.index
                        ):
                            rural.loc[
                                i.replace("urban decentral", "rural"),
                                "p_nom_opt",
                            ] += urban_decentral.loc[i, "p_nom_opt"]
                            rural.loc[
                                i.replace("urban decentral", "rural"), "p_nom"
                            ] += urban_decentral.loc[i, "p_nom"]
                            network_solved.remove(comp.name, i)
                        else:
                            print(i)
                            comp.df.loc[i, "bus0"] = comp.df.loc[
                                i, "bus0"
                            ].replace("urban decentral", "rural")
                            comp.df.loc[i, "bus1"] = comp.df.loc[
                                i, "bus1"
                            ].replace("urban decentral", "rural")
                            comp.df.loc[i, "carrier"] = comp.df.loc[
                                i, "carrier"
                            ].replace("urban decentral", "rural")
                    if comp.name in ["Store"]:
                        if (
                            i.replace("urban decentral", "rural")
                            in rural.index
                        ):
                            rural.loc[
                                i.replace("urban decentral", "rural"),
                                "e_nom_opt",
                            ] += urban_decentral.loc[i, "e_nom_opt"]
                            rural.loc[
                                i.replace("urban decentral", "rural"), "e_nom"
                            ] += urban_decentral.loc[i, "e_nom"]
                            network_solved.remove(comp.name, i)

                        else:
                            print(i)
                            network_solved.stores.loc[i, "bus"] = (
                                network_solved.stores.loc[i, "bus"].replace(
                                    "urban decentral", "rural"
                                )
                            )
                            network_solved.stores.loc[i, "carrier"] = (
                                "rural water tanks"
                            )

    urban_decentral_loads = network_prepared.loads[
        network_prepared.loads.carrier.str.contains("urban decentral")
    ]

    for i, row in urban_decentral_loads.iterrows():
        if i in network_prepared.loads_t.p_set.columns:
            network_prepared.loads_t.p_set[
                i.replace("urban decentral", "rural")
            ] += network_prepared.loads_t.p_set[i]
    network_prepared.mremove("Load", urban_decentral_loads.index)

    return network_prepared, network_solved

def neighbor_reduction():
    network_solved = read_network()
    network_prepared = prepared_network(planning_horizon="2045")

    # network.links.drop("pipe_retrofit", axis="columns", inplace=True)

    wanted_countries = [
        "DE",
        "AT",
        "CH",
        "CZ",
        "PL",
        "SE",
        "NO",
        "DK",
        "GB",
        "NL",
        "BE",
        "FR",
        "LU",
    ]

    foreign_buses = network_solved.buses[
        (~network_solved.buses.index.str.contains("|".join(wanted_countries)))
        | (network_solved.buses.index.str.contains("FR6"))
    ]
    network_solved.buses = network_solved.buses.drop(
        network_solved.buses.loc[foreign_buses.index].index
    )

    # Add H2 demand of Fischer-Tropsch process and methanolisation
    # to industrial H2 demands
    industrial_hydrogen = network_prepared.loads.loc[
        network_prepared.loads.carrier == "H2 for industry"
    ]
    fischer_tropsch = (
        network_solved.links_t.p0[
            network_solved.links.loc[
                network_solved.links.carrier == "Fischer-Tropsch"
            ].index
        ]
        .mul(network_solved.snapshot_weightings.generators, axis=0)
        .sum()
    )
    methanolisation = (
        network_solved.links_t.p0[
            network_solved.links.loc[
                network_solved.links.carrier == "methanolisation"
            ].index
        ]
        .mul(network_solved.snapshot_weightings.generators, axis=0)
        .sum()
    )
    for i, row in industrial_hydrogen.iterrows():
        network_prepared.loads.loc[i, "p_set"] += (
            fischer_tropsch[
                fischer_tropsch.index.str.startswith(row.bus[:5])
            ].sum()
            / 8760
        )
        network_prepared.loads.loc[i, "p_set"] += (
            methanolisation[
                methanolisation.index.str.startswith(row.bus[:5])
            ].sum()
            / 8760
        )
    # drop lines where both endpoints are foreign (links follow below)

    network_solved.lines = network_solved.lines.drop(
        network_solved.lines[
            (
                network_solved.lines["bus0"].isin(network_solved.buses.index)
                == False
            )
            & (
                network_solved.lines["bus1"].isin(network_solved.buses.index)
                == False
            )
        ].index
    )

    # select all lines which have at bus1 the bus which is kept
    lines_cb_1 = network_solved.lines[
        (
            network_solved.lines["bus0"].isin(network_solved.buses.index)
            == False
        )
    ]

    # create a load at bus1 with the line's hourly loading
    for i, k in zip(lines_cb_1.bus1.values, lines_cb_1.index):

        # Copy loading of lines into hourly resolution
        pset = pd.Series(
            index=network_prepared.snapshots,
            data=network_solved.lines_t.p1[k].resample("H").ffill(),
        )
        pset["2011-12-31 22:00:00"] = pset["2011-12-31 21:00:00"]
        pset["2011-12-31 23:00:00"] = pset["2011-12-31 21:00:00"]

        # Loads are all imported from the prepared network in the end
        network_prepared.add(
            "Load",
            "slack_fix " + i + " " + k,
            bus=i,
            p_set=pset,
            carrier=lines_cb_1.loc[k, "carrier"],
        )

    # select all lines which have at bus0 the bus which is kept
    lines_cb_0 = network_solved.lines[
        (
            network_solved.lines["bus1"].isin(network_solved.buses.index)
            == False
        )
    ]

    # create a load at bus0 with the line's hourly loading
    for i, k in zip(lines_cb_0.bus0.values, lines_cb_0.index):
        # Copy loading of lines into hourly resolution
        pset = pd.Series(
            index=network_prepared.snapshots,
            data=network_solved.lines_t.p0[k].resample("H").ffill(),
        )
        pset["2011-12-31 22:00:00"] = pset["2011-12-31 21:00:00"]
        pset["2011-12-31 23:00:00"] = pset["2011-12-31 21:00:00"]

        network_prepared.add(
            "Load",
            "slack_fix " + i + " " + k,
            bus=i,
            p_set=pset,
            carrier=lines_cb_0.loc[k, "carrier"],
        )

    # do the same for links
    network_solved.mremove(
        "Link",
        network_solved.links[
            (~network_solved.links.bus0.isin(network_solved.buses.index))
            | (~network_solved.links.bus1.isin(network_solved.buses.index))
        ].index,
    )

    # select all links which have at bus1 the bus which is kept
    links_cb_1 = network_solved.links[
        (
            network_solved.links["bus0"].isin(network_solved.buses.index)
            == False
        )
    ]

    # create a load at bus1 with the link's hourly loading
    for i, k in zip(links_cb_1.bus1.values, links_cb_1.index):
        pset = pd.Series(
            index=network_prepared.snapshots,
            data=network_solved.links_t.p1[k].resample("H").ffill(),
        )
        pset["2011-12-31 22:00:00"] = pset["2011-12-31 21:00:00"]
        pset["2011-12-31 23:00:00"] = pset["2011-12-31 21:00:00"]

        network_prepared.add(
            "Load",
            "slack_fix_links " + i + " " + k,
            bus=i,
            p_set=pset,
            carrier=links_cb_1.loc[k, "carrier"],
        )

    # select all links which have at bus0 the bus which is kept
    links_cb_0 = network_solved.links[
        (
            network_solved.links["bus1"].isin(network_solved.buses.index)
            == False
        )
    ]

    # create a load at bus0 with the link's hourly loading
    for i, k in zip(links_cb_0.bus0.values, links_cb_0.index):
        pset = pd.Series(
            index=network_prepared.snapshots,
            data=network_solved.links_t.p0[k].resample("H").ffill(),
        )
        pset["2011-12-31 22:00:00"] = pset["2011-12-31 21:00:00"]
        pset["2011-12-31 23:00:00"] = pset["2011-12-31 21:00:00"]

        network_prepared.add(
            "Load",
            "slack_fix_links " + i + " " + k,
            bus=i,
            p_set=pset,
            carrier=links_cb_0.carrier[k],
        )

    # drop remaining foreign components
    for comp in network_solved.iterate_components():
        if "bus0" in comp.df.columns:
            network_solved.mremove(
                comp.name,
                comp.df[~comp.df.bus0.isin(network_solved.buses.index)].index,
            )
            network_solved.mremove(
                comp.name,
                comp.df[~comp.df.bus1.isin(network_solved.buses.index)].index,
            )
        elif "bus" in comp.df.columns:
            network_solved.mremove(
                comp.name,
                comp.df[~comp.df.bus.isin(network_solved.buses.index)].index,
            )

    # Combine urban decentral and rural heat
    network_prepared, network_solved = combine_decentral_and_rural_heat(
        network_solved, network_prepared
    )

    # writing components of neighboring countries to etrago tables

    # Set country tag for all buses
    network_solved.buses.country = network_solved.buses.index.str[:2]
    neighbors = network_solved.buses[network_solved.buses.country != "DE"]

    neighbors["new_index"] = (
        db.next_etrago_id("bus") + neighbors.reset_index().index
    )

    # Use index of AC buses created by electrical_neighbours
    foreign_ac_buses = db.select_dataframe(
        """
        SELECT * FROM grid.egon_etrago_bus
        WHERE carrier = 'AC' AND v_nom = 380
        AND country != 'DE' AND scn_name = 'eGon100RE'
        AND bus_id NOT IN (SELECT bus_i FROM osmtgmod_results.bus_data)
        """
    )
    buses_with_defined_id = neighbors[
        (neighbors.carrier == "AC")
        & (neighbors.country.isin(foreign_ac_buses.country.values))
    ].index
    neighbors.loc[buses_with_defined_id, "new_index"] = (
        foreign_ac_buses.set_index("x")
        .loc[neighbors.loc[buses_with_defined_id, "x"]]
        .bus_id.values
    )

    # lines, the foreign crossborder lines
    # (without crossborder lines to Germany!)

    neighbor_lines = network_solved.lines[
        network_solved.lines.bus0.isin(neighbors.index)
        & network_solved.lines.bus1.isin(neighbors.index)
    ]
    if not network_solved.lines_t["s_max_pu"].empty:
        neighbor_lines_t = network_prepared.lines_t["s_max_pu"][
            neighbor_lines.index
        ]

    neighbor_lines.reset_index(inplace=True)
    neighbor_lines.bus0 = (
        neighbors.loc[neighbor_lines.bus0, "new_index"].reset_index().new_index
    )
    neighbor_lines.bus1 = (
        neighbors.loc[neighbor_lines.bus1, "new_index"].reset_index().new_index
    )
    neighbor_lines.index += db.next_etrago_id("line")

    if not network_solved.lines_t["s_max_pu"].empty:
        for i in neighbor_lines_t.columns:
            # [Scrutinizer] "The variable neighbor_lines_t does not seem to
            # be defined in case the emptiness check above is False. Are you
            # sure this can never be the case?" -- as written, the loop is
            # guarded by the same s_max_pu emptiness check that defines
            # neighbor_lines_t, so the warning is a false positive.
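            # A more defensive pattern (editor's sketch, not project code)
            # would define the frame unconditionally before the first guard,
            # so the two checks cannot drift apart:
            #
            #     neighbor_lines_t = pd.DataFrame(index=network_prepared.snapshots)
            #     if not network_solved.lines_t["s_max_pu"].empty:
            #         neighbor_lines_t = network_prepared.lines_t["s_max_pu"][
            #             neighbor_lines.index
            #         ]
            #
            # With that, this loop simply iterates over zero columns when no
            # s_max_pu data exists, and the analyzer warning disappears.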
846
            new_index = neighbor_lines[neighbor_lines["name"] == i].index
847
            neighbor_lines_t.rename(columns={i: new_index[0]}, inplace=True)
848
849
    # links
850
    neighbor_links = network_solved.links[
851
        network_solved.links.bus0.isin(neighbors.index)
852
        & network_solved.links.bus1.isin(neighbors.index)
853
    ]
854
855
    neighbor_links.reset_index(inplace=True)
856
    neighbor_links.bus0 = (
857
        neighbors.loc[neighbor_links.bus0, "new_index"].reset_index().new_index
858
    )
859
    neighbor_links.bus1 = (
860
        neighbors.loc[neighbor_links.bus1, "new_index"].reset_index().new_index
861
    )
862
    neighbor_links.index += db.next_etrago_id("link")
863
864
    # generators
865
    neighbor_gens = network_solved.generators[
866
        network_solved.generators.bus.isin(neighbors.index)
867
    ]
868
    neighbor_gens_t = network_prepared.generators_t["p_max_pu"][
869
        neighbor_gens[
870
            neighbor_gens.index.isin(
871
                network_prepared.generators_t["p_max_pu"].columns
872
            )
873
        ].index
874
    ]
875
876
    gen_time = [
877
        "solar",
878
        "onwind",
879
        "solar rooftop",
880
        "offwind-ac",
881
        "offwind-dc",
882
        "solar-hsat",
883
        "urban central solar thermal",
884
        "rural solar thermal",
885
        "offwind-float",
886
    ]
887
888
    missing_gent = neighbor_gens[
889
        neighbor_gens["carrier"].isin(gen_time)
890
        & ~neighbor_gens.index.isin(neighbor_gens_t.columns)
891
    ].index
892
893
    gen_timeseries = network_prepared.generators_t["p_max_pu"].copy()
894
    for mgt in missing_gent:  # mgt: missing generator timeseries
895
        try:
896
            neighbor_gens_t[mgt] = gen_timeseries.loc[:, mgt[0:-5]]
897
        except:
898
            print(f"There are not timeseries for {mgt}")
899
900
    neighbor_gens.reset_index(inplace=True)
901
    neighbor_gens.bus = (
902
        neighbors.loc[neighbor_gens.bus, "new_index"].reset_index().new_index
903
    )
904
    neighbor_gens.index += db.next_etrago_id("generator")
905
906
    for i in neighbor_gens_t.columns:
907
        new_index = neighbor_gens[neighbor_gens["Generator"] == i].index
908
        neighbor_gens_t.rename(columns={i: new_index[0]}, inplace=True)
909
910
    # loads
911
    # imported from prenetwork in 1h-resolution
912
    neighbor_loads = network_prepared.loads[
913
        network_prepared.loads.bus.isin(neighbors.index)
914
    ]
915
    neighbor_loads_t_index = neighbor_loads.index[
916
        neighbor_loads.index.isin(network_prepared.loads_t.p_set.columns)
917
    ]
918
    neighbor_loads_t = network_prepared.loads_t["p_set"][
919
        neighbor_loads_t_index
920
    ]
921
922
    neighbor_loads.reset_index(inplace=True)
923
    neighbor_loads.bus = (
924
        neighbors.loc[neighbor_loads.bus, "new_index"].reset_index().new_index
925
    )
926
    neighbor_loads.index += db.next_etrago_id("load")
927
928
    for i in neighbor_loads_t.columns:
929
        new_index = neighbor_loads[neighbor_loads["Load"] == i].index
930
        neighbor_loads_t.rename(columns={i: new_index[0]}, inplace=True)
931
932
    # stores
933
    neighbor_stores = network_solved.stores[
934
        network_solved.stores.bus.isin(neighbors.index)
935
    ]
936
    neighbor_stores_t_index = neighbor_stores.index[
937
        neighbor_stores.index.isin(network_solved.stores_t.e_min_pu.columns)
938
    ]
939
    neighbor_stores_t = network_prepared.stores_t["e_min_pu"][
940
        neighbor_stores_t_index
941
    ]
942
943
    neighbor_stores.reset_index(inplace=True)
944
    neighbor_stores.bus = (
945
        neighbors.loc[neighbor_stores.bus, "new_index"].reset_index().new_index
946
    )
947
    neighbor_stores.index += db.next_etrago_id("store")
948
949
    for i in neighbor_stores_t.columns:
950
        new_index = neighbor_stores[neighbor_stores["Store"] == i].index
951
        neighbor_stores_t.rename(columns={i: new_index[0]}, inplace=True)
952
953
    # storage_units
954
    neighbor_storage = network_solved.storage_units[
955
        network_solved.storage_units.bus.isin(neighbors.index)
956
    ]
957
    neighbor_storage_t_index = neighbor_storage.index[
958
        neighbor_storage.index.isin(
959
            network_solved.storage_units_t.inflow.columns
960
        )
961
    ]
962
    neighbor_storage_t = network_prepared.storage_units_t["inflow"][
963
        neighbor_storage_t_index
964
    ]
965
966
    neighbor_storage.reset_index(inplace=True)
967
    neighbor_storage.bus = (
968
        neighbors.loc[neighbor_storage.bus, "new_index"]
969
        .reset_index()
970
        .new_index
971
    )
972
    neighbor_storage.index += db.next_etrago_id("storage")
973
974
    for i in neighbor_storage_t.columns:
975
        new_index = neighbor_storage[
976
            neighbor_storage["StorageUnit"] == i
977
        ].index
978
        neighbor_storage_t.rename(columns={i: new_index[0]}, inplace=True)
979
980
    # Connect to local database
981
    engine = db.engine()
982
983
    neighbors["scn_name"] = "eGon100RE"
984
    neighbors.index = neighbors["new_index"]
985
986
    # Correct geometry for non AC buses
987
    carriers = set(neighbors.carrier.to_list())
988
    carriers = [e for e in carriers if e not in ("AC")]
989
    non_AC_neighbors = pd.DataFrame()
990
    for c in carriers:
991
        c_neighbors = neighbors[neighbors.carrier == c].set_index(
992
            "location", drop=False
993
        )
994
        for i in ["x", "y"]:
995
            c_neighbors = c_neighbors.drop(i, axis=1)
996
        coordinates = neighbors[neighbors.carrier == "AC"][
997
            ["location", "x", "y"]
998
        ].set_index("location")
999
        c_neighbors = pd.concat([coordinates, c_neighbors], axis=1).set_index(
1000
            "new_index", drop=False
1001
        )
1002
        non_AC_neighbors = pd.concat([non_AC_neighbors, c_neighbors])
1003
1004
    neighbors = pd.concat(
1005
        [neighbors[neighbors.carrier == "AC"], non_AC_neighbors]
1006
    )
1007
1008
    for i in [
1009
        "new_index",
1010
        "control",
1011
        "generator",
1012
        "location",
1013
        "sub_network",
1014
        "unit",
1015
        "substation_lv",
1016
        "substation_off",
1017
    ]:
1018
        neighbors = neighbors.drop(i, axis=1)
1019
1020
    # Add geometry column
1021
    neighbors = (
1022
        gpd.GeoDataFrame(
1023
            neighbors, geometry=gpd.points_from_xy(neighbors.x, neighbors.y)
1024
        )
1025
        .rename_geometry("geom")
1026
        .set_crs(4326)
1027
    )
1028
1029
    # Unify carrier names
1030
    neighbors.carrier = neighbors.carrier.str.replace(" ", "_")
1031
    neighbors.carrier.replace(
1032
        {
1033
            "gas": "CH4",
1034
            "gas_for_industry": "CH4_for_industry",
1035
            "urban_central_heat": "central_heat",
1036
            "EV_battery": "Li_ion",
1037
            "urban_central_water_tanks": "central_heat_store",
1038
            "rural_water_tanks": "rural_heat_store",
1039
        },
1040
        inplace=True,
1041
    )
1042
1043
    neighbors[~neighbors.carrier.isin(["AC"])].to_postgis(
1044
        "egon_etrago_bus",
1045
        engine,
1046
        schema="grid",
1047
        if_exists="append",
1048
        index=True,
1049
        index_label="bus_id",
1050
    )
1051
1052
    # prepare and write neighboring crossborder lines to etrago tables
1053
    def lines_to_etrago(neighbor_lines=neighbor_lines, scn="eGon100RE"):
1054
        neighbor_lines["scn_name"] = scn
1055
        neighbor_lines["cables"] = 3 * neighbor_lines["num_parallel"].astype(
1056
            int
1057
        )
1058
        neighbor_lines["s_nom"] = neighbor_lines["s_nom_min"]
1059
1060
        for i in [
1061
            "Line",
1062
            "x_pu_eff",
1063
            "r_pu_eff",
1064
            "sub_network",
1065
            "x_pu",
1066
            "r_pu",
1067
            "g_pu",
1068
            "b_pu",
1069
            "s_nom_opt",
1070
            "i_nom",
1071
            "dc",
1072
        ]:
1073
            neighbor_lines = neighbor_lines.drop(i, axis=1)
1074
1075
        # Define geometry and add to lines dataframe as 'topo'
1076
        gdf = gpd.GeoDataFrame(index=neighbor_lines.index)
1077
        gdf["geom_bus0"] = neighbors.geom[neighbor_lines.bus0].values
1078
        gdf["geom_bus1"] = neighbors.geom[neighbor_lines.bus1].values
1079
        gdf["geometry"] = gdf.apply(
1080
            lambda x: LineString([x["geom_bus0"], x["geom_bus1"]]), axis=1
1081
        )
1082
1083
        neighbor_lines = (
1084
            gpd.GeoDataFrame(neighbor_lines, geometry=gdf["geometry"])
1085
            .rename_geometry("topo")
1086
            .set_crs(4326)
1087
        )
1088
1089
        neighbor_lines["lifetime"] = get_sector_parameters("electricity", scn)[
1090
            "lifetime"
1091
        ]["ac_ehv_overhead_line"]
1092
1093
        neighbor_lines.to_postgis(
1094
            "egon_etrago_line",
1095
            engine,
1096
            schema="grid",
1097
            if_exists="append",
1098
            index=True,
1099
            index_label="line_id",
1100
        )
1101
1102
    lines_to_etrago(neighbor_lines=neighbor_lines, scn="eGon100RE")
1103
1104
    def links_to_etrago(neighbor_links, scn="eGon100RE", extendable=True):
1105
        """Prepare and write neighboring crossborder links to eTraGo table
1106
1107
        This function prepare the neighboring crossborder links
1108
        generated the PyPSA-eur-sec (p-e-s) run by:
1109
          * Delete the useless columns
1110
          * If extendable is false only (non default case):
1111
              * Replace p_nom = 0 with the p_nom_op values (arrising
1112
                from the p-e-s optimisation)
1113
              * Setting p_nom_extendable to false
1114
          * Add geomtry to the links: 'geom' and 'topo' columns
1115
          * Change the name of the carriers to have the consistent in
1116
            eGon-data
1117
1118
        The function insert then the link to the eTraGo table and has
1119
        no return.
1120
1121
        Parameters
1122
        ----------
1123
        neighbor_links : pandas.DataFrame
1124
            Dataframe containing the neighboring crossborder links
1125
        scn_name : str
1126
            Name of the scenario
1127
        extendable : bool
1128
            Boolean expressing if the links should be extendable or not
1129
1130
        Returns
1131
        -------
1132
        None
1133
1134
        """
1135
        neighbor_links["scn_name"] = scn
1136
1137
        dropped_carriers = [
1138
            "Link",
1139
            "geometry",
1140
            "tags",
1141
            "under_construction",
1142
            "underground",
1143
            "underwater_fraction",
1144
            "bus2",
1145
            "bus3",
1146
            "bus4",
1147
            "efficiency2",
1148
            "efficiency3",
1149
            "efficiency4",
1150
            "lifetime",
1151
            "pipe_retrofit",
1152
            "committable",
1153
            "start_up_cost",
1154
            "shut_down_cost",
1155
            "min_up_time",
1156
            "min_down_time",
1157
            "up_time_before",
1158
            "down_time_before",
1159
            "ramp_limit_up",
1160
            "ramp_limit_down",
1161
            "ramp_limit_start_up",
1162
            "ramp_limit_shut_down",
1163
            "length_original",
1164
            "reversed",
1165
            "location",
1166
            "project_status",
1167
            "dc",
1168
            "voltage",
1169
        ]
1170
1171
        if extendable:
1172
            dropped_carriers.append("p_nom_opt")
1173
            neighbor_links = neighbor_links.drop(
1174
                columns=dropped_carriers,
1175
                errors="ignore",
1176
            )
1177
1178
        else:
1179
            dropped_carriers.append("p_nom")
1180
            dropped_carriers.append("p_nom_extendable")
1181
            neighbor_links = neighbor_links.drop(
1182
                columns=dropped_carriers,
1183
                errors="ignore",
1184
            )
1185
            neighbor_links = neighbor_links.rename(
1186
                columns={"p_nom_opt": "p_nom"}
1187
            )
1188
            neighbor_links["p_nom_extendable"] = False
1189
1190
        if neighbor_links.empty:
1191
            print("No links selected")
1192
            return
1193
1194
        # Define geometry and add to lines dataframe as 'topo'
1195
        gdf = gpd.GeoDataFrame(
1196
            index=neighbor_links.index,
1197
            data={
1198
                "geom_bus0": neighbors.loc[neighbor_links.bus0, "geom"].values,
1199
                "geom_bus1": neighbors.loc[neighbor_links.bus1, "geom"].values,
1200
            },
1201
        )
1202
1203
        gdf["geometry"] = gdf.apply(
1204
            lambda x: LineString([x["geom_bus0"], x["geom_bus1"]]), axis=1
1205
        )
1206
1207
        neighbor_links = (
1208
            gpd.GeoDataFrame(neighbor_links, geometry=gdf["geometry"])
1209
            .rename_geometry("topo")
1210
            .set_crs(4326)
1211
        )
1212
1213
        # Unify carrier names
1214
        neighbor_links.carrier = neighbor_links.carrier.str.replace(" ", "_")
1215
1216
        neighbor_links.carrier.replace(
1217
            {
1218
                "H2_Electrolysis": "power_to_H2",
1219
                "H2_Fuel_Cell": "H2_to_power",
1220
                "H2_pipeline_retrofitted": "H2_retrofit",
1221
                "SMR": "CH4_to_H2",
1222
                "Sabatier": "H2_to_CH4",
1223
                "gas_for_industry": "CH4_for_industry",
1224
                "gas_pipeline": "CH4",
1225
                "urban_central_gas_boiler": "central_gas_boiler",
1226
                "urban_central_resistive_heater": "central_resistive_heater",
1227
                "urban_central_water_tanks_charger": "central_heat_store_charger",
1228
                "urban_central_water_tanks_discharger": "central_heat_store_discharger",
1229
                "rural_water_tanks_charger": "rural_heat_store_charger",
1230
                "rural_water_tanks_discharger": "rural_heat_store_discharger",
1231
                "urban_central_gas_CHP": "central_gas_CHP",
1232
                "urban_central_air_heat_pump": "central_heat_pump",
1233
                "rural_ground_heat_pump": "rural_heat_pump",
1234
            },
1235
            inplace=True,
1236
        )
1237
1238
        H2_links = {
1239
            "H2_to_CH4": "H2_to_CH4",
1240
            "H2_to_power": "H2_to_power",
1241
            "power_to_H2": "power_to_H2_system",
1242
            "CH4_to_H2": "CH4_to_H2",
1243
        }
1244
1245
        for c in H2_links.keys():
1246
1247
            neighbor_links.loc[
1248
                (neighbor_links.carrier == c),
1249
                "lifetime",
1250
            ] = get_sector_parameters("gas", "eGon100RE")["lifetime"][
1251
                H2_links[c]
1252
            ]
1253
1254
        neighbor_links.to_postgis(
1255
            "egon_etrago_link",
1256
            engine,
1257
            schema="grid",
1258
            if_exists="append",
1259
            index=True,
1260
            index_label="link_id",
1261
        )
1262
1263
    extendable_links_carriers = [
1264
        "battery charger",
1265
        "battery discharger",
1266
        "home battery charger",
1267
        "home battery discharger",
1268
        "rural water tanks charger",
1269
        "rural water tanks discharger",
1270
        "urban central water tanks charger",
1271
        "urban central water tanks discharger",
1272
        "urban decentral water tanks charger",
1273
        "urban decentral water tanks discharger",
1274
        "H2 Electrolysis",
1275
        "H2 Fuel Cell",
1276
        "SMR",
1277
        "Sabatier",
1278
    ]
1279
1280
    # delete unwanted carriers for eTraGo
1281
    excluded_carriers = [
1282
        "gas for industry CC",
1283
        "SMR CC",
1284
        "DAC",
1285
    ]
1286
    neighbor_links = neighbor_links[
1287
        ~neighbor_links.carrier.isin(excluded_carriers)
1288
    ]
1289
1290
    # Combine CHP_CC and CHP
1291
    chp_cc = neighbor_links[
1292
        neighbor_links.carrier == "urban central gas CHP CC"
1293
    ]
1294
    for index, row in chp_cc.iterrows():
1295
        neighbor_links.loc[
1296
            neighbor_links.Link == row.Link.replace("CHP CC", "CHP"),
1297
            "p_nom_opt",
1298
        ] += row.p_nom_opt
1299
        neighbor_links.loc[
1300
            neighbor_links.Link == row.Link.replace("CHP CC", "CHP"), "p_nom"
1301
        ] += row.p_nom
1302
        neighbor_links.drop(index, inplace=True)
1303
1304
    # Combine heat pumps
1305
    # Like in Germany, there are air heat pumps in central heat grids
1306
    # and ground heat pumps in rural areas
1307
    rural_air = neighbor_links[neighbor_links.carrier == "rural air heat pump"]
1308
    for index, row in rural_air.iterrows():
1309
        neighbor_links.loc[
1310
            neighbor_links.Link == row.Link.replace("air", "ground"),
1311
            "p_nom_opt",
1312
        ] += row.p_nom_opt
1313
        neighbor_links.loc[
1314
            neighbor_links.Link == row.Link.replace("air", "ground"), "p_nom"
1315
        ] += row.p_nom
1316
        neighbor_links.drop(index, inplace=True)
1317
    links_to_etrago(
1318
        neighbor_links[neighbor_links.carrier.isin(extendable_links_carriers)],
1319
        "eGon100RE",
1320
    )
1321
    links_to_etrago(
1322
        neighbor_links[
1323
            ~neighbor_links.carrier.isin(extendable_links_carriers)
1324
        ],
1325
        "eGon100RE",
1326
        extendable=False,
1327
    )
1328
    # Include links time-series
1329
    # For heat_pumps
1330
    hp = neighbor_links[neighbor_links["carrier"].str.contains("heat pump")]
1331
1332
    neighbor_eff_t = network_prepared.links_t["efficiency"][
1333
        hp[hp.Link.isin(network_prepared.links_t["efficiency"].columns)].index
1334
    ]
1335
1336
    missing_hp = hp[~hp["Link"].isin(neighbor_eff_t.columns)].Link
1337
1338
    eff_timeseries = network_prepared.links_t["efficiency"].copy()
1339
    for met in missing_hp:  # met: missing efficiency timeseries
1340
        try:
1341
            neighbor_eff_t[met] = eff_timeseries.loc[:, met[0:-5]]
1342
        except:
1343
            print(f"There are not timeseries for heat_pump {met}")
1344
1345
    for i in neighbor_eff_t.columns:
1346
        new_index = neighbor_links[neighbor_links["Link"] == i].index
1347
        neighbor_eff_t.rename(columns={i: new_index[0]}, inplace=True)
1348
1349
    # Include links time-series
1350
    # For ev_chargers
1351
    ev = neighbor_links[neighbor_links["carrier"].str.contains("BEV charger")]
1352
1353
    ev_p_max_pu = network_prepared.links_t["p_max_pu"][
1354
        ev[ev.Link.isin(network_prepared.links_t["p_max_pu"].columns)].index
1355
    ]
1356
1357
    missing_ev = ev[~ev["Link"].isin(ev_p_max_pu.columns)].Link
1358
1359
    ev_p_max_pu_timeseries = network_prepared.links_t["p_max_pu"].copy()
1360
    for mct in missing_ev:  # evt: missing charger timeseries
1361
        try:
1362
            ev_p_max_pu[mct] = ev_p_max_pu_timeseries.loc[:, mct[0:-5]]
1363
        except:
1364
            print(f"There are not timeseries for EV charger {mct}")
1365
1366
    for i in ev_p_max_pu.columns:
1367
        new_index = neighbor_links[neighbor_links["Link"] == i].index
1368
        ev_p_max_pu.rename(columns={i: new_index[0]}, inplace=True)
1369
1370
    # prepare neighboring generators for etrago tables
1371
    neighbor_gens["scn_name"] = "eGon100RE"
1372
    neighbor_gens["p_nom"] = neighbor_gens["p_nom_opt"]
1373
    neighbor_gens["p_nom_extendable"] = False
1374
1375
    # Unify carrier names
1376
    neighbor_gens.carrier = neighbor_gens.carrier.str.replace(" ", "_")
1377
1378
    neighbor_gens.carrier.replace(
1379
        {
1380
            "onwind": "wind_onshore",
1381
            "ror": "run_of_river",
1382
            "offwind-ac": "wind_offshore",
1383
            "offwind-dc": "wind_offshore",
1384
            "offwind-float": "wind_offshore",
1385
            "urban_central_solar_thermal": "urban_central_solar_thermal_collector",
1386
            "residential_rural_solar_thermal": "residential_rural_solar_thermal_collector",
1387
            "services_rural_solar_thermal": "services_rural_solar_thermal_collector",
1388
            "solar-hsat": "solar",
1389
        },
1390
        inplace=True,
1391
    )
1392
1393
    for i in [
1394
        "Generator",
1395
        "weight",
1396
        "lifetime",
1397
        "p_set",
1398
        "q_set",
1399
        "p_nom_opt",
1400
        "e_sum_min",
1401
        "e_sum_max",
1402
    ]:
1403
        neighbor_gens = neighbor_gens.drop(i, axis=1)
1404
1405
    neighbor_gens.to_sql(
1406
        "egon_etrago_generator",
1407
        engine,
1408
        schema="grid",
1409
        if_exists="append",
1410
        index=True,
1411
        index_label="generator_id",
1412
    )
1413
1414
    # prepare neighboring loads for etrago tables
1415
    neighbor_loads["scn_name"] = "eGon100RE"
1416
1417
    # Unify carrier names
1418
    neighbor_loads.carrier = neighbor_loads.carrier.str.replace(" ", "_")
1419
1420
    neighbor_loads.carrier.replace(
1421
        {
1422
            "electricity": "AC",
1423
            "DC": "AC",
1424
            "industry_electricity": "AC",
1425
            "H2_pipeline_retrofitted": "H2_system_boundary",
1426
            "gas_pipeline": "CH4_system_boundary",
1427
            "gas_for_industry": "CH4_for_industry",
1428
            "urban_central_heat": "central_heat",
1429
        },
1430
        inplace=True,
1431
    )
1432
1433
    neighbor_loads = neighbor_loads.drop(
1434
        columns=["Load"],
1435
        errors="ignore",
1436
    )
1437
1438
    neighbor_loads.to_sql(
1439
        "egon_etrago_load",
1440
        engine,
1441
        schema="grid",
1442
        if_exists="append",
1443
        index=True,
1444
        index_label="load_id",
1445
    )
1446
1447
    # prepare neighboring stores for etrago tables
1448
    neighbor_stores["scn_name"] = "eGon100RE"
1449
1450
    # Unify carrier names
1451
    neighbor_stores.carrier = neighbor_stores.carrier.str.replace(" ", "_")
1452
1453
    neighbor_stores.carrier.replace(
1454
        {
1455
            "Li_ion": "battery",
1456
            "gas": "CH4",
1457
            "urban_central_water_tanks": "central_heat_store",
1458
            "rural_water_tanks": "rural_heat_store",
1459
            "EV_battery": "battery_storage",
1460
        },
1461
        inplace=True,
1462
    )
    neighbor_stores.loc[
        (
            (neighbor_stores.e_nom_max <= 1e9)
            & (neighbor_stores.carrier == "H2_Store")
        ),
        "carrier",
    ] = "H2_underground"
    neighbor_stores.loc[
        (
            (neighbor_stores.e_nom_max > 1e9)
            & (neighbor_stores.carrier == "H2_Store")
        ),
        "carrier",
    ] = "H2_overground"

    for i in [
        "Store",
        "p_set",
        "q_set",
        "e_nom_opt",
        "lifetime",
        "e_initial_per_period",
        "e_cyclic_per_period",
        "location",
    ]:
        neighbor_stores = neighbor_stores.drop(i, axis=1, errors="ignore")
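
    # Overwrite the lifetime of the H2 stores with the technology-specific
    # values from the eGon100RE scenario parameters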
    for c in ["H2_underground", "H2_overground"]:
        neighbor_stores.loc[
            (neighbor_stores.carrier == c),
            "lifetime",
        ] = get_sector_parameters("gas", "eGon100RE")["lifetime"][c]

    neighbor_stores.to_sql(
        "egon_etrago_store",
        engine,
        schema="grid",
        if_exists="append",
        index=True,
        index_label="store_id",
    )

    # prepare neighboring storage_units for etrago tables
    neighbor_storage["scn_name"] = "eGon100RE"

    # Unify carrier names
    neighbor_storage.carrier = neighbor_storage.carrier.str.replace(" ", "_")

    neighbor_storage.carrier.replace(
        {"PHS": "pumped_hydro", "hydro": "reservoir"}, inplace=True
    )

    for i in [
        "StorageUnit",
        "p_nom_opt",
        "state_of_charge_initial_per_period",
        "cyclic_state_of_charge_per_period",
    ]:
        neighbor_storage = neighbor_storage.drop(i, axis=1, errors="ignore")

    neighbor_storage.to_sql(
        "egon_etrago_storage",
        engine,
        schema="grid",
        if_exists="append",
        index=True,
        index_label="storage_id",
    )
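
    # The etrago *_timeseries tables store each component's full time series
    # as a list in a single row, hence the values.tolist() conversions below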

    # writing neighboring loads_t p_sets to etrago tables

    neighbor_loads_t_etrago = pd.DataFrame(
        columns=["scn_name", "temp_id", "p_set"],
        index=neighbor_loads_t.columns,
    )
    neighbor_loads_t_etrago["scn_name"] = "eGon100RE"
    neighbor_loads_t_etrago["temp_id"] = 1
    for i in neighbor_loads_t.columns:
        neighbor_loads_t_etrago.at[i, "p_set"] = neighbor_loads_t[
            i
        ].values.tolist()

    neighbor_loads_t_etrago.to_sql(
        "egon_etrago_load_timeseries",
        engine,
        schema="grid",
        if_exists="append",
        index=True,
        index_label="load_id",
    )

    # writing neighboring link_t efficiency and p_max_pu to etrago tables
    neighbor_link_t_etrago = pd.DataFrame(
        columns=["scn_name", "temp_id", "p_max_pu", "efficiency"],
        index=neighbor_eff_t.columns.to_list() + ev_p_max_pu.columns.to_list(),
    )
    neighbor_link_t_etrago["scn_name"] = "eGon100RE"
    neighbor_link_t_etrago["temp_id"] = 1
    for i in neighbor_eff_t.columns:
        neighbor_link_t_etrago.at[i, "efficiency"] = neighbor_eff_t[
            i
        ].values.tolist()
    for i in ev_p_max_pu.columns:
        neighbor_link_t_etrago.at[i, "p_max_pu"] = ev_p_max_pu[
            i
        ].values.tolist()

    neighbor_link_t_etrago.to_sql(
        "egon_etrago_link_timeseries",
        engine,
        schema="grid",
        if_exists="append",
        index=True,
        index_label="link_id",
    )

    # writing neighboring generator_t p_max_pu to etrago tables
    neighbor_gens_t_etrago = pd.DataFrame(
        columns=["scn_name", "temp_id", "p_max_pu"],
        index=neighbor_gens_t.columns,
    )
    neighbor_gens_t_etrago["scn_name"] = "eGon100RE"
    neighbor_gens_t_etrago["temp_id"] = 1
    for i in neighbor_gens_t.columns:
        neighbor_gens_t_etrago.at[i, "p_max_pu"] = neighbor_gens_t[
            i
        ].values.tolist()

    neighbor_gens_t_etrago.to_sql(
        "egon_etrago_generator_timeseries",
        engine,
        schema="grid",
        if_exists="append",
        index=True,
        index_label="generator_id",
    )

    # writing neighboring stores_t e_min_pu to etrago tables
    neighbor_stores_t_etrago = pd.DataFrame(
        columns=["scn_name", "temp_id", "e_min_pu"],
        index=neighbor_stores_t.columns,
    )
    neighbor_stores_t_etrago["scn_name"] = "eGon100RE"
    neighbor_stores_t_etrago["temp_id"] = 1
    for i in neighbor_stores_t.columns:
        neighbor_stores_t_etrago.at[i, "e_min_pu"] = neighbor_stores_t[
            i
        ].values.tolist()

    neighbor_stores_t_etrago.to_sql(
        "egon_etrago_store_timeseries",
        engine,
        schema="grid",
        if_exists="append",
        index=True,
        index_label="store_id",
    )

    # writing neighboring storage_units inflow to etrago tables
    neighbor_storage_t_etrago = pd.DataFrame(
        columns=["scn_name", "temp_id", "inflow"],
        index=neighbor_storage_t.columns,
    )
    neighbor_storage_t_etrago["scn_name"] = "eGon100RE"
    neighbor_storage_t_etrago["temp_id"] = 1
    for i in neighbor_storage_t.columns:
        neighbor_storage_t_etrago.at[i, "inflow"] = neighbor_storage_t[
            i
        ].values.tolist()

    neighbor_storage_t_etrago.to_sql(
        "egon_etrago_storage_timeseries",
        engine,
        schema="grid",
        if_exists="append",
        index=True,
        index_label="storage_id",
    )

    # writing neighboring lines_t s_max_pu to etrago tables
    if not network_solved.lines_t["s_max_pu"].empty:
        neighbor_lines_t_etrago = pd.DataFrame(
            columns=["scn_name", "s_max_pu"], index=neighbor_lines_t.columns
        )
        neighbor_lines_t_etrago["scn_name"] = "eGon100RE"

        for i in neighbor_lines_t.columns:
            neighbor_lines_t_etrago.at[i, "s_max_pu"] = neighbor_lines_t[
                i
            ].values.tolist()

        neighbor_lines_t_etrago.to_sql(
            "egon_etrago_line_timeseries",
            engine,
            schema="grid",
            if_exists="append",
            index=True,
            index_label="line_id",
        )


def prepared_network(planning_horizon=3):
    if egon.data.config.settings()["egon-data"]["--run-pypsa-eur"]:
        with open(
            __path__[0] + "/datasets/pypsaeur/config_prepare.yaml", "r"
        ) as stream:
            data_config = yaml.safe_load(stream)

        target_file = (
            Path(".")
            / "run-pypsa-eur"
            / "pypsa-eur"
            / "results"
            / data_config["run"]["name"]
            / "prenetworks"
            / f"base_s_{data_config['scenario']['clusters'][0]}"
            f"_l{data_config['scenario']['ll'][0]}"
            f"_{data_config['scenario']['opts'][0]}"
            f"_{data_config['scenario']['sector_opts'][0]}"
            f"_{data_config['scenario']['planning_horizons'][planning_horizon]}.nc"
        )

    else:
        target_file = (
            Path(".")
            / "data_bundle_powerd_data"
            / "pypsa_eur"
            / "21122024_3h_clean_run"
            / "results"
            / "prenetworks"
            / "prenetwork_post-manipulate_pre-solve"
            / "base_s_39_lc1.25__cb40ex0-T-H-I-B-solar+p3-dist1_2045.nc"
        )

    return pypsa.Network(target_file.absolute().as_posix())


def overwrite_H2_pipeline_share():
    """Overwrite retrofitted_CH4pipeline-to-H2pipeline_share value

    Overwrite retrofitted_CH4pipeline-to-H2pipeline_share in the
    scenario parameter table if p-e-s is run.
    This function writes to the database and has no return.

    """
    scn_name = "eGon100RE"
    # Select source and target from dataset configuration
    target = egon.data.config.datasets()["pypsa-eur-sec"]["target"]

    n = read_network()

    H2_pipelines = n.links[n.links["carrier"] == "H2 pipeline retrofitted"]
    CH4_pipelines = n.links[n.links["carrier"] == "gas pipeline"]
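    # Share = mean over all pipelines of the optimised retrofitted H2
    # capacity relative to the original CH4 pipeline capacity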
    H2_pipes_share = np.mean(
        [
            (i / j)
            for i, j in zip(
                H2_pipelines.p_nom_opt.to_list(), CH4_pipelines.p_nom.to_list()
            )
        ]
    )
    logger.info(
        "retrofitted_CH4pipeline-to-H2pipeline_share = " + str(H2_pipes_share)
    )

    parameters = db.select_dataframe(
        f"""
        SELECT *
        FROM {target['scenario_parameters']['schema']}.{target['scenario_parameters']['table']}
        WHERE name = '{scn_name}'
        """
    )

    gas_param = parameters.loc[0, "gas_parameters"]
    gas_param["retrofitted_CH4pipeline-to-H2pipeline_share"] = H2_pipes_share
    gas_param = json.dumps(gas_param)

    # Update data in db
    db.execute_sql(
        f"""
    UPDATE {target['scenario_parameters']['schema']}.{target['scenario_parameters']['table']}
    SET gas_parameters = '{gas_param}'
    WHERE name = '{scn_name}';
    """
    )


def update_electrical_timeseries_germany(network):
    """Replace electrical demand time series in Germany with data from egon-data

    Parameters
    ----------
    network : pypsa.Network
        Network including demand time series from pypsa-eur

    Returns
    -------
    network : pypsa.Network
        Network including electrical demand time series in Germany from egon-data

    """
    year = network.year
    skip = network.snapshot_weightings.objective.iloc[0].astype("int")
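    # skip equals the temporal resolution in hours (e.g. 3 for a 3-hourly
    # model) and is used to subsample the hourly input time series below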
    df = pd.read_csv(
        "input-pypsa-eur-sec/electrical_demand_timeseries_DE_eGon100RE.csv"
    )

    annual_demand = pd.Series(index=[2019, 2037])
    annual_demand_industry = pd.Series(index=[2019, 2037])
    # Define values from status2019 for interpolation
    # Residential and service (in TWh)
    annual_demand.loc[2019] = 124.71 + 143.26
    # Industry (in TWh)
    annual_demand_industry.loc[2019] = 241.925

    # Define values from NEP 2023 scenario B 2037 for interpolation
    # Residential and service (in TWh)
    annual_demand.loc[2037] = 104 + 153.1
    # Industry (in TWh)
    annual_demand_industry.loc[2037] = 334.0

    # Set interpolated demands for years before 2037
    if year < 2037:
        # Calculate annual demands for year by linearly interpolating
        # between 2019 and 2037.
        # Done separately for industry and for residential and service to
        # fit to pypsa-eur's structure.
        annual_rate = (annual_demand.loc[2037] - annual_demand.loc[2019]) / (
            2037 - 2019
        )
        annual_demand_year = annual_demand.loc[2019] + annual_rate * (
            year - 2019
        )

        annual_rate_industry = (
            annual_demand_industry.loc[2037] - annual_demand_industry.loc[2019]
        ) / (2037 - 2019)
        annual_demand_year_industry = annual_demand_industry.loc[
            2019
        ] + annual_rate_industry * (year - 2019)

        # Scale time series for the 100% scenario with the annual demands.
        # The shape of the curve is taken from the 100% scenario since the
        # same weather and calendar year is used there.
        network.loads_t.p_set.loc[:, "DE0 0"] = (
            df["residential_and_service"].loc[::skip]
            / df["residential_and_service"].sum()
            * annual_demand_year
            * 1e6
        ).values

        network.loads_t.p_set.loc[:, "DE0 0 industry electricity"] = (
            df["industry"].loc[::skip]
            / df["industry"].sum()
            * annual_demand_year_industry
            * 1e6
        ).values

    elif year == 2045:
        network.loads_t.p_set.loc[:, "DE0 0"] = (
            df["residential_and_service"].loc[::skip].values
        )

        network.loads_t.p_set.loc[:, "DE0 0 industry electricity"] = (
            df["industry"].loc[::skip].values
        )

    else:
        print(
            "Scaling not implemented for years between 2037 and 2045"
            " or beyond 2045."
        )
        return

    network.loads.loc["DE0 0 industry electricity", "p_set"] = 0.0

    return network


def geothermal_district_heating(network):
    """Add the option to build geothermal power plants in district heating in Germany

    Parameters
    ----------
    network : pypsa.Network
        Network from pypsa-eur without geothermal generators

    Returns
    -------
    network : pypsa.Network
        Updated network with geothermal generators

    """

    costs_and_potentials = pd.read_csv(
        "input-pypsa-eur-sec/geothermal_potential_germany.csv"
    )

    network.add("Carrier", "urban central geo thermal")
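
    # Add one extendable generator per potential class; overnight costs are
    # annualised over the plant lifetime (0.07 presumably being the interest
    # rate)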
    for i, row in costs_and_potentials.iterrows():
        # Set lifetime of geothermal plant to 30 years based on:
        # Ableitung eines Korridors für den Ausbau der erneuerbaren Wärme im
        # Gebäudebereich, Beuth Hochschule für Technik, Berlin; ifeu – Institut
        # für Energie- und Umweltforschung Heidelberg GmbH, February 2017
        lifetime_geothermal = 30

        network.add(
            "Generator",
            f"DE0 0 urban central geo thermal {i}",
            bus="DE0 0 urban central heat",
            carrier="urban central geo thermal",
            p_nom_extendable=True,
            p_nom_max=row["potential [MW]"],
            capital_cost=annualize_capital_costs(
                row["cost [EUR/kW]"] * 1e6, lifetime_geothermal, 0.07
            ),
        )
    return network


def h2_overground_stores(network):
    """Add hydrogen overground stores to each hydrogen node

    In pypsa-eur, only countries without the potential for underground
    hydrogen stores have the option to build overground hydrogen tanks.
    Overground stores are more expensive, but are not restricted by the
    geological potential. To allow higher hydrogen store capacities in each
    country, optional hydrogen overground tanks are also added to nodes with
    a potential for underground stores.

    Parameters
    ----------
    network : pypsa.Network
        Network without hydrogen overground stores at each hydrogen node

    Returns
    -------
    network : pypsa.Network
        Network with hydrogen overground stores at each hydrogen node

    """

    underground_h2_stores = network.stores[
        (network.stores.carrier == "H2 Store")
        & (network.stores.e_nom_max != np.inf)
    ]

    overground_h2_stores = network.stores[
        (network.stores.carrier == "H2 Store")
        & (network.stores.e_nom_max == np.inf)
    ]
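
    # Add an extendable overground tank at every bus that currently only has
    # an underground store, priced at the mean capital cost of the existing
    # overground tanks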
    network.madd(
        "Store",
        underground_h2_stores.bus + " overground Store",
        bus=underground_h2_stores.bus.values,
        e_nom_extendable=True,
        e_cyclic=True,
        carrier="H2 Store",
        capital_cost=overground_h2_stores.capital_cost.mean(),
    )

    return network


def update_heat_timeseries_germany(network):
    # Import heat demand curves for Germany from eGon-data
    df_egon_heat_demand = pd.read_csv(
        "input-pypsa-eur-sec/heat_demand_timeseries_DE_eGon100RE.csv"
    )

    # Replace heat demand curves in Germany with values from eGon-data
    network.loads_t.p_set.loc[:, "DE1 0 rural heat"] = (
        df_egon_heat_demand.loc[:, "residential rural"].values
        + df_egon_heat_demand.loc[:, "service rural"].values
    )

    network.loads_t.p_set.loc[:, "DE1 0 urban central heat"] = (
        df_egon_heat_demand.loc[:, "urban central"].values
    )

    return network


def drop_biomass(network):
    carrier = "biomass"

    for c in network.iterate_components():
        network.mremove(c.name, c.df[c.df.index.str.contains(carrier)].index)
    return network


def postprocessing_biomass_2045():

    network = read_network()
    network = drop_biomass(network)

    with open(
        __path__[0] + "/datasets/pypsaeur/config_solve.yaml", "r"
    ) as stream:
        data_config = yaml.safe_load(stream)

    target_file = (
        Path(".")
        / "run-pypsa-eur"
        / "pypsa-eur"
        / "results"
        / data_config["run"]["name"]
        / "postnetworks"
        / f"base_s_{data_config['scenario']['clusters'][0]}"
        f"_l{data_config['scenario']['ll'][0]}"
        f"_{data_config['scenario']['opts'][0]}"
        f"_{data_config['scenario']['sector_opts'][0]}"
        f"_{data_config['scenario']['planning_horizons'][3]}.nc"
    )

    network.export_to_netcdf(target_file)


def drop_urban_decentral_heat(network):
    carrier = "urban decentral heat"

    # Add urban decentral heat demand to rural heat demand
    for country in network.loads.loc[
        network.loads.carrier == carrier, "bus"
    ].str[:5]:

        if f"{country} {carrier}" in network.loads_t.p_set.columns:
            network.loads_t.p_set[
                f"{country} rural heat"
            ] += network.loads_t.p_set[f"{country} {carrier}"]
        else:
            print(
                f"""No time series available for {country} {carrier}.
                  Using static p_set."""
            )

            network.loads_t.p_set[
                f"{country} rural heat"
            ] += network.loads.loc[f"{country} {carrier}", "p_set"]

    # In some cases low-temperature heat for industry is connected to the
    # urban decentral heat bus since there is no urban central heat bus.
    # These loads are reconnected to the representative rural heat bus:
    network.loads.loc[
        (network.loads.bus.str.contains(carrier))
        & (~network.loads.carrier.str.contains(carrier.replace(" heat", ""))),
        "bus",
    ] = network.loads.loc[
        (network.loads.bus.str.contains(carrier))
        & (~network.loads.carrier.str.contains(carrier.replace(" heat", ""))),
        "bus",
    ].str.replace(
        "urban decentral", "rural"
    )

    # Drop components attached to urban decentral heat
    for c in network.iterate_components():
        network.mremove(
            c.name, c.df[c.df.index.str.contains("urban decentral")].index
        )

    return network


def district_heating_shares(network):
    df = pd.read_csv(
        "data_bundle_powerd_data/district_heating_shares_egon.csv"
    ).set_index("country_code")

    heat_demand_per_country = (
        network.loads_t.p_set[
            network.loads[
                (network.loads.carrier.str.contains("heat"))
                & network.loads.index.isin(network.loads_t.p_set.columns)
            ].index
        ]
        .groupby(network.loads.bus.str[:5], axis=1)
        .sum()
    )

    for country in heat_demand_per_country.columns:
        network.loads_t.p_set[f"{country} urban central heat"] = (
            heat_demand_per_country.loc[:, country].mul(
                df.loc[country[:2]].values[0]
            )
        )
        network.loads_t.p_set[f"{country} rural heat"] = (
            heat_demand_per_country.loc[:, country].mul(
                (1 - df.loc[country[:2]].values[0])
            )
        )

    # Drop links with undefined buses or carrier
    network.mremove(
        "Link",
        network.links[
            ~network.links.bus0.isin(network.buses.index.values)
        ].index,
    )
    network.mremove(
        "Link",
        network.links[network.links.carrier == ""].index,
    )

    return network


def drop_new_gas_pipelines(network):
    network.mremove(
        "Link",
        network.links[
            network.links.index.str.contains("gas pipeline new")
        ].index,
    )

    return network


def drop_fossil_gas(network):
    network.mremove(
        "Generator",
        network.generators[network.generators.carrier == "gas"].index,
    )

    return network


def drop_conventional_power_plants(network):

    # Drop lignite and coal power plants in Germany
    network.mremove(
        "Link",
        network.links[
            (network.links.carrier.isin(["coal", "lignite"]))
            & (network.links.bus1.str.startswith("DE"))
        ].index,
    )

    return network


def rual_heat_technologies(network):
    network.mremove(
        "Link",
        network.links[
            network.links.index.str.contains("rural gas boiler")
        ].index,
    )

    network.mremove(
        "Generator",
        network.generators[
            network.generators.carrier.str.contains("rural solar thermal")
        ].index,
    )

    return network


def coal_exit_D():

    df = pd.read_csv(
        "run-pypsa-eur/pypsa-eur/resources/powerplants_s_39.csv", index_col=0
    )
    df_de_coal = df[
        (df.Country == "DE")
        & ((df.Fueltype == "Lignite") | (df.Fueltype == "Hard Coal"))
    ]
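    # Bring forward the retirement of German coal and lignite plants: any
    # unit scheduled to go offline in 2035 or later is set to retire in 2034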
    df_de_coal.loc[df_de_coal.DateOut.values >= 2035, "DateOut"] = 2034
    df.loc[df_de_coal.index] = df_de_coal

    df.to_csv("run-pypsa-eur/pypsa-eur/resources/powerplants_s_39.csv")


def offwind_potential_D(network, capacity_per_sqkm=4):

    offwind_ac_factor = 1942
    offwind_dc_factor = 10768
    offwind_float_factor = 134
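    # The factors above presumably represent the eligible offshore area per
    # carrier (in km²), so that p_nom_max = area * capacity density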

    # Set p_nom_max for German offshore wind with respect to
    # capacity_per_sqkm = 4 instead of the default of 2 (which is applied
    # for the rest of Europe)
    network.generators.loc[
        (network.generators.bus == "DE0 0")
        & (network.generators.carrier == "offwind-ac"),
        "p_nom_max",
    ] = (
        offwind_ac_factor * capacity_per_sqkm
    )
    network.generators.loc[
        (network.generators.bus == "DE0 0")
        & (network.generators.carrier == "offwind-dc"),
        "p_nom_max",
    ] = (
        offwind_dc_factor * capacity_per_sqkm
    )
    network.generators.loc[
        (network.generators.bus == "DE0 0")
        & (network.generators.carrier == "offwind-float"),
        "p_nom_max",
    ] = (
        offwind_float_factor * capacity_per_sqkm
    )

    return network


def additional_grid_expansion_2045(network):
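
    # Relax the global transmission expansion constraint ("lc_limit") by 5 %
    # to allow some additional grid expansion in 2045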
    network.global_constraints.loc["lc_limit", "constant"] *= 1.05

    return network


def execute():
    if egon.data.config.settings()["egon-data"]["--run-pypsa-eur"]:
        with open(
            __path__[0] + "/datasets/pypsaeur/config.yaml", "r"
        ) as stream:
            data_config = yaml.safe_load(stream)

        if data_config["foresight"] == "myopic":

            print("Adjusting scenarios on the myopic pathway...")

            coal_exit_D()

            networks = pd.Series()

            for i in range(
                0, len(data_config["scenario"]["planning_horizons"])
            ):
                nc_file = pd.Series(
                    f"base_s_{data_config['scenario']['clusters'][0]}"
                    f"_l{data_config['scenario']['ll'][0]}"
                    f"_{data_config['scenario']['opts'][0]}"
                    f"_{data_config['scenario']['sector_opts'][0]}"
                    f"_{data_config['scenario']['planning_horizons'][i]}.nc"
                )
                networks = networks._append(nc_file)

            scn_path = pd.DataFrame(
                index=["2025", "2030", "2035", "2045"],
                columns=["prenetwork", "functions"],
            )
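
            # Map each planning-horizon year to its prenetwork file and the
            # list of manipulation functions applied to it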
            for year in scn_path.index:
                scn_path.at[year, "prenetwork"] = networks[
                    networks.str.contains(year)
                ].values

            for year in ["2025", "2030", "2035"]:
                scn_path.loc[year, "functions"] = [
                    # drop_urban_decentral_heat,
                    update_electrical_timeseries_germany,
                    geothermal_district_heating,
                    h2_overground_stores,
                    drop_new_gas_pipelines,
                    offwind_potential_D,
                ]

            scn_path.loc["2045", "functions"] = [
                drop_biomass,
                # drop_urban_decentral_heat,
                update_electrical_timeseries_germany,
                geothermal_district_heating,
                h2_overground_stores,
                drop_new_gas_pipelines,
                drop_fossil_gas,
                offwind_potential_D,
                additional_grid_expansion_2045,
                # drop_conventional_power_plants,
                # rual_heat_technologies, #To be defined
            ]

            network_path = (
                Path(".")
                / "run-pypsa-eur"
                / "pypsa-eur"
                / "results"
                / data_config["run"]["name"]
                / "prenetworks"
            )

            for scn in scn_path.index:
                path = network_path / scn_path.at[scn, "prenetwork"]
                network = pypsa.Network(path)
                network.year = int(scn)
                for manipulator in scn_path.at[scn, "functions"]:
                    network = manipulator(network)
                network.export_to_netcdf(path)

        elif (data_config["foresight"] == "overnight") & (
            int(data_config["scenario"]["planning_horizons"][0]) > 2040
        ):

            print("Adjusting overnight long-term scenario...")

            network_path = (
                Path(".")
                / "run-pypsa-eur"
                / "pypsa-eur"
                / "results"
                / data_config["run"]["name"]
                / "prenetworks"
                / f"elec_s_{data_config['scenario']['clusters'][0]}"
                f"_l{data_config['scenario']['ll'][0]}"
                f"_{data_config['scenario']['opts'][0]}"
                f"_{data_config['scenario']['sector_opts'][0]}"
                f"_{data_config['scenario']['planning_horizons'][0]}.nc"
            )

            network = pypsa.Network(network_path)

            network = drop_biomass(network)
            network = drop_urban_decentral_heat(network)
            network = district_heating_shares(network)
            network = update_heat_timeseries_germany(network)
            network = update_electrical_timeseries_germany(network)
            network = geothermal_district_heating(network)
            network = h2_overground_stores(network)
            network = drop_new_gas_pipelines(network)
            network = drop_fossil_gas(network)
            network = rual_heat_technologies(network)

            network.export_to_netcdf(network_path)

        else:
            print(
                f"""Adjustments on prenetworks are not implemented for
                foresight option {data_config['foresight']} and
                year {data_config['scenario']['planning_horizons'][0]}.
                Please check the pypsaeur.execute function.
                """
            )
    else:
        print("Pypsa-eur is not executed due to the settings of egon-data")