Passed: push to dev (cdb453...6b0082) by unknown, ran 02:40 (queued 01:00)
`heavy_duty_transport.download_hgv_data()` — grade B

**Complexity:** Conditions: 8
**Size:** Total lines: 37, Code lines: 24
**Duplication:** Lines: 0, Ratio: 0 %
**Importance:** Changes: 0

| Metric | Value  |
| ------ | ------ |
| eloc   | 24     |
| dl     | 0      |
| loc    | 37     |
| rs     | 7.3333 |
| c      | 0      |
| b      | 0      |
| f      | 0      |
| cc     | 8      |
| nop    | 0      |
"""
Heavy Duty Transport / Heavy Goods Vehicles (HGV)

Main module for the preparation of model data (static and time series) for
heavy duty transport.

**Contents of this module**

* Creation of DB tables
* Download and preprocessing of vehicle registration data from BASt
* Calculation of hydrogen demand based on a Voronoi distribution of counted
  truck traffic among NUTS 3 regions
* Writing of results to DB
* Mapping of demand to H2 buses and writing to DB

**Configuration**

The config of this dataset can be found in *datasets.yml* in section
*mobility_hgv*.

**Scenarios and variations**

Assumptions can be changed within the *datasets.yml*.

In the context of the eGon project, it is assumed that e-trucks will be
completely hydrogen-powered. In both scenarios the hydrogen consumption is
assumed to be 6.68 kg H2 per 100 km, with an additional
[supply chain leakage rate of 0.5 %](
https://www.energy.gov/eere/fuelcells/doe-technical-targets-hydrogen-delivery).

### Scenario NEP C 2035

The ramp-up figures are taken from
[Scenario C 2035 of the Grid Development Plan 2021-2035](
https://www.netzentwicklungsplan.de/sites/default/files/paragraphs-files/
NEP_2035_V2021_2_Entwurf_Teil1.pdf). According to this, 100,000 e-trucks are
expected in Germany in 2035, each covering an average of 100,000 km per year.
In total this amounts to 10 billion km.

### Scenario eGon100RE

In the eGon100RE scenario it is assumed that HGV traffic is completely
hydrogen-powered. The total freight traffic of 40 billion km is taken from the
[BMWK Langfristszenarien, GHG-emission-free scenarios (SNF > 12 t zGG)](
https://www.langfristszenarien.de/enertile-explorer-wAssets/docs/
LFS3_Langbericht_Verkehr_final.pdf#page=17).

## Methodology

Using a Voronoi interpolation, the counts from the BASt data are distributed
according to the area fractions of the Voronoi cells within each MV grid
district or any other geometry such as NUTS-3.
"""
from pathlib import Path
import csv
import zipfile

from loguru import logger
import requests

from egon.data import config, db
from egon.data.datasets import Dataset
from egon.data.datasets.emobility.heavy_duty_transport.create_h2_buses import (
    insert_hgv_h2_demand,
)
from egon.data.datasets.emobility.heavy_duty_transport.db_classes import (
    EgonHeavyDutyTransportVoronoi,
)
from egon.data.datasets.emobility.heavy_duty_transport.h2_demand_distribution import (
    run_egon_truck,
)

WORKING_DIR = Path(".", "heavy_duty_transport").resolve()
DATASET_CFG = config.datasets()["mobility_hgv"]
TESTMODE_OFF = (
    config.settings()["egon-data"]["--dataset-boundary"] == "Everything"
)

def create_tables():
    """Drop and (re-)create the DB table for the Voronoi-based HGV demand."""
    engine = db.engine()
    EgonHeavyDutyTransportVoronoi.__table__.drop(bind=engine, checkfirst=True)
    EgonHeavyDutyTransportVoronoi.__table__.create(
        bind=engine, checkfirst=True
    )

    logger.debug("Created tables.")

def download_hgv_data():
    """Download BASt counting data and, in test mode, NUTS-3 geometries."""
    sources = DATASET_CFG["original_data"]["sources"]

    # Create the working directory if it does not exist
    WORKING_DIR.mkdir(parents=True, exist_ok=True)

    url = sources["BAST"]["url"]
    file = WORKING_DIR / sources["BAST"]["file"]

    response = requests.get(url)
    response.raise_for_status()

    # The raw data is ISO-8859-1 encoded and semicolon-separated; re-write it
    # as a regular CSV file. newline="" keeps csv.writer's line endings intact.
    with open(file, "w", newline="") as f:
        writer = csv.writer(f)
        for line in response.iter_lines():
            writer.writerow(line.decode("ISO-8859-1").split(";"))

    logger.debug("Downloaded BAST data.")

    if not TESTMODE_OFF:
        url = sources["NUTS"]["url"]

        r = requests.get(url, stream=True)
        r.raise_for_status()
        file = WORKING_DIR / sources["NUTS"]["file"]

        # Stream the zip archive to disk in small chunks
        with open(file, "wb") as fd:
            for chunk in r.iter_content(chunk_size=512):
                fd.write(chunk)

        # Extract into a directory named after the file (without extension)
        directory = WORKING_DIR / "_".join(
            sources["NUTS"]["file"].split(".")[:-1]
        )

        with zipfile.ZipFile(file, "r") as zip_ref:
            zip_ref.extractall(directory)

        logger.debug("Downloaded NUTS data.")

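The area-fraction distribution described in the methodology section can be sketched as follows. This is a simplified, hypothetical illustration with made-up numbers; the actual implementation lives in `h2_demand_distribution.run_egon_truck` and operates on real Voronoi geometries:

```python
def distribute_count(count, overlap_areas):
    """Distribute one counting station's traffic among regions.

    overlap_areas maps a region id to the area of the intersection of that
    region with the station's Voronoi cell; the count is split
    proportionally to those area fractions.
    """
    total_area = sum(overlap_areas.values())
    return {
        region: count * area / total_area
        for region, area in overlap_areas.items()
    }


# A station's Voronoi cell overlapping two NUTS-3 regions (areas in km2):
shares = distribute_count(1000, {"DE300": 30.0, "DE401": 10.0})
# → {"DE300": 750.0, "DE401": 250.0}
```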
class HeavyDutyTransport(Dataset):
    def __init__(self, dependencies):
        super().__init__(
            name="HeavyDutyTransport",
            version="0.0.1",
            dependencies=dependencies,
            tasks=(
                {
                    create_tables,
                    download_hgv_data,
                },
                run_egon_truck,
                insert_hgv_h2_demand,
            ),
        )
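The `tasks` tuple mixes a set with plain callables: entries of the tuple run sequentially, while members of a set are order-independent and may run concurrently. Actual scheduling is handled by the egon.data pipeline, not by this module; the following is only a hypothetical, minimal interpreter of that convention:

```python
def run_tasks(tasks):
    """Run a task graph: tuple entries in order, set members in any order."""
    for task in tasks:
        if isinstance(task, (set, frozenset)):
            for member in task:  # execution order within a set is unspecified
                member()
        else:
            task()


log = []
run_tasks(
    (
        {lambda: log.append("create_tables"), lambda: log.append("download")},
        lambda: log.append("run_egon_truck"),
        lambda: log.append("insert_hgv_h2_demand"),
    )
)
# The two set members run first (in either order), then the rest in sequence.
```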
142