
5G Trace Visualizer

License

Please see the LICENSE for terms and conditions for use, reproduction, and distribution.

Summary

This set of Python scripts allows you to convert pcap, pcapng or pdml 5G protocol traces (Wireshark, tcpdump, ...) into SVG sequence diagrams.

It was born from the need to automatically convert 5G traces into something readable given that we needed to account for:

  • Mix of HTTP/2, 5G-NAS and PFCP protocols for 5G
  • Additionally, GTP/GTP', Diameter when testing 4G/5G interoperability
  • Sequence details are quite tiring to check in the Wireshark GUI
  • Specific versions of Wireshark may be needed to decode specific versions of (e.g.) 5G-NAS
  • The shift to containers results in traces with multiple IP addresses that are dynamically allocated by k8s
  • Mapping of IPs to container names in the deployment, including Calico and Multus interfaces
  • In some cases, what is of interest are the exchanges between namespaces and not between containers
  • Mapping of IPs to VM names in the deployment
  • Different coloring of the different 5G protocols (NAS, HTTP/2, PFCP, ...), as well as differentiating between requests and responses where possible

We could not find a commercial tool doing exactly what we needed. While PlantUML can generate nice diagrams, doing those manually requires too much time. So we resorted to putting together this script.

Requirements

  • You need to have Java installed (executing the java command must launch Java). This is required because PlantUML runs on Java
  • plantuml.jar must be placed in the base directory (see [place plantuml.jar here.txt](place plantuml.jar here.txt)). This application was tested with the 2019.11 version (Apache Software License Version) of plantuml.jar. You can find it here.
  • Wireshark portable of the desired versions placed in the /wireshark folder. See instructions in folder.

Installation process for Linux

  1. clone the repo
  2. download and extract plantuml.jar in the base directory.
  3. sudo apt -y install wireshark tshark
  4. sudo apt -y install default-jre python3-pip
  5. sudo pip3 install --upgrade pyyaml packaging

Example run command: python3 trace_visualizer.py -wireshark "OS" ./doc/free5gc.pcap

Application structure

The figure below summarizes what this small application does (SVG, PNG, Mermaid)

Application structure

Plotting Scripts

You will notice several plotting_xxx.ipynb files.

These are IPython notebooks that make use of the implemented functionality to generate nice, interactive plots based on data from 5G traces.

In order to run the scripts you will need:

  • Jupyter Lab (I use Anaconda and that is what I will assume was installed for the sake of documentation)
  • Install NodeJS: conda install nodejs.
  • Plotly: used for plotting. In order to install it, you can follow the instructions here.

Since these scripts rely on parsing of 3rd party outputs, no assurance is given that these are up-to-date. You should consider them as just examples of how you could accomplish such visualization(s).

The following scripts are included:

Parsing Spirent Result Files

File: plotting_parsing_spirent.ipynb

For those of you using Spirent for testing, you may need to quickly compare certain parameters (e.g. Basic Data Message One Way Trip Delay (micro-sec)).

The way Spirent stores test results is by means of an Excel file named <date>_RID-<test number>__<test name>.xls. You can use this script to scan a folder containing such Excel files and load data from each of them into a table you can use for comparing test runs.

Currently, the script only imports parameters from the L5-7 Client|Basic worksheet but can be easily extended. An example is provided to plot a comparison bar chart of the one-way delay for each test.
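
A minimal sketch (not the notebook's actual code) of how such a folder scan could look with pandas; the folder path, file pattern and function name are placeholders, and it assumes the delay value appears as a column in that worksheet:

import glob
import os
import pandas as pd

def load_spirent_results(folder):
    frames = []
    for xls_path in glob.glob(os.path.join(folder, '*_RID-*.xls')):
        # Worksheet name as described above; further sheets could be added here
        sheet = pd.read_excel(xls_path, sheet_name='L5-7 Client|Basic')
        sheet['source_file'] = os.path.basename(xls_path)
        frames.append(sheet)
    return pd.concat(frames, ignore_index=True)

results = load_spirent_results('./spirent_results')
print(results['Basic Data Message One Way Trip Delay (micro-sec)'].describe())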

For obvious reasons, no example files are provided.

Plotting 5GC messages

File: plotting_pcap.ipynb

This script provides some functionality to convert packet traces to DataFrame format and to plot the resulting data using plotly.

This script can be used to plot a 5GC packet capture on a time axis. Do note that we are just plotting the first plot_data element (you can trace multiple capture files simultaneously). The color bars use the same protocol color code as the sequence diagram.
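
A rough, self-contained illustration of the idea (not the notebook's code); the DataFrame columns and sample rows are made up for this snippet:

import pandas as pd
import plotly.express as px

# Hypothetical parsed-trace data; the real notebook builds this from the exported trace
packets = pd.DataFrame({
    'timestamp': [0.000, 0.120, 0.350],
    'protocol':  ['NAS-5GS', 'HTTP/2', 'PFCP'],
    'summary':   ['Registration request', 'SBI request', 'Session Establishment Request'],
})

fig = px.scatter(packets, x='timestamp', y='protocol', color='protocol',
                 hover_data=['summary'], title='5GC messages over time')
fig.show()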

5GC visualization of messages over time

More interactive HTML version available here.

Plotting 5GC messages and k8s load

File: plotting_k8s_metrics.ipynb

This script shows a more complex use case where k8s KPIs and packet traces can be plotted on a common time axis (no example raw data provided for this case). The end result would look as shown below:

5GC visualization of messages over time

In the case of the k8s KPIs, the data needs to be in the format output by kubectl get --raw /apis/metrics.k8s.io/v1beta1/pods/, one such output per line.
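
A minimal parsing sketch, assuming each line of the metrics file is one JSON document as returned by that command (field names follow the metrics.k8s.io/v1beta1 API; the file path and function name are placeholders):

import json

def parse_metrics_lines(path):
    samples = []
    with open(path) as f:
        for line in f:
            doc = json.loads(line)
            for item in doc.get('items', []):
                for container in item.get('containers', []):
                    samples.append({
                        'pod': item['metadata']['name'],
                        'namespace': item['metadata']['namespace'],
                        'container': container['name'],
                        'cpu': container['usage']['cpu'],        # e.g. '12m'
                        'memory': container['usage']['memory'],  # e.g. '180Mi'
                    })
    return samples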

Plotting User Plane latency

File: plotting_latency_analysis.ipynb

Given:

  • A test case where UDP or ICMP packets are transmitted between UE and DN
  • That each generated packet has a unique data payload
  • A packet trace of N3 packets (GTP-U)
  • A packet trace of N6 packets
  • That the hosts doing each capture are time-synchronized (or are the same host)
  • This is usually the case if you use commonly-used test tools to test User Plane (UP)

This script calculates the one-way delay for each single packet and plots a normalized histogram (i.e. a distribution) of the packet latency.
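
Conceptually, the calculation boils down to matching each unique payload across the two captures and subtracting the capture timestamps; a toy sketch with made-up data:

# n3 and n6 map the unique packet payload (hex) to its capture timestamp in seconds
n3 = {'dead01': 10.000123, 'dead02': 10.010456, 'dead03': 10.020111}
n6 = {'dead01': 10.000987, 'dead02': 10.011321}

one_way_delay_s = [n6[p] - n3[p] for p in n3 if p in n6]
# plotting a normalized histogram of one_way_delay_s then gives the latency distribution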

The packet parsing does not use the lxml library because it proved to be too resource-intensive to parse the whole PDML file just to get the payloads, so a custom XML parser is used instead. For ca. 90k packets, each trace took around 2 minutes on an i7 laptop without needing too much memory.

To enable re-use of the parsed data without having to parse the pcap files again, the parsed data is stored in pickle format and compressed with bz2.
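
A sketch of that storage step with the standard library (the variable name and data are illustrative; the file name matches the example output linked below):

import bz2
import pickle

one_way_delay_s = [0.00086, 0.00087, 0.00091]   # placeholder for the parsed latency data

with bz2.BZ2File('UP_example_analysis.pbz2', 'wb') as f:
    pickle.dump(one_way_delay_s, f)

with bz2.BZ2File('UP_example_analysis.pbz2', 'rb') as f:
    restored = pickle.load(f)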

Result based on the example data included (N3 capture, N6 capture):

5GC visualization of messages over time

Resulting (compressed) Pickle file: UP_example_analysis.pbz2

Comparing User Plane latency

File: plotting_latency_compare.ipynb

Used for comparing latency data of several analyzed user plane captures. Separated from the parsing script so as to avoid re-parsing every time.

Result based on the example data included (based on importing UP_example_analysis.pbz2 and UP_example_analysis_2.pbz2):

5GC visualization of messages over time

HTML version (interactive): latency_comparison.html

Procedure Time

File: plotting_procedure_time.ipynb

Takes as input a CP (control plane) trace and plots procedure durations.

Note: only a limited set of procedures is supported for now. More may be supported over time.

Examples

Help

Run python trace_visualizer.py --help for a list of all available parameters, default values and other things you may need.

Wireshark Portable

The -wireshark option lets you use a specific Wireshark version. The way this works is that this parameter is used to generate the path for the tshark (and, if more than one trace is specified, mergecap) call. It is a scripted command, nothing more:

  • OS: no absolute path for tshark is generated. That is, the tshark from the OS's path will be used
  • <version number>: an absolute path location for the tshark executable is generated. The executable is assumed to be located in the following location: wireshark/WiresharkPortable_<version number>.
  • latest: similar to the previous option, this option scans the wireshark folder and out of all of the found folders chooses the one with the highest version number.

Example: WiresharkPortable version 3.4.4 should be placed in a directory named WiresharkPortable_3.4.4.

Do note that "Wireshark Portable" exists for Windows only. On Linux, the same folder convention applies: just make sure that the script can find tshark and mergecap where it expects them (see the sketch below). That is:

  • tshark: wireshark/WiresharkPortable_<version number>/App/Wireshark/tshark
  • mergecap: wireshark/WiresharkPortable_<version number>/App/Wireshark/mergecap
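
A sketch of the version-to-path logic described above (not the script's exact code); it assumes the folder layout listed here and the packaging module pulled in by the install steps:

import glob
import os
from packaging.version import Version

def tshark_path(version, base='wireshark'):
    if version == 'OS':
        return 'tshark'  # rely on the tshark found in the OS path
    if version == 'latest':
        folders = glob.glob(os.path.join(base, 'WiresharkPortable_*'))
        version = max(Version(os.path.basename(f).split('_', 1)[1]) for f in folders)
    return os.path.join(base, f'WiresharkPortable_{version}', 'App', 'Wireshark', 'tshark')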

Note: whether symlinks work is not tested.

5GC trace

Many, many thanks to the free5GC project for providing some 5GC traces we could use to show some examples of how to use the application.

The free5GC is an open-source project for 5th generation (5G) mobile core networks. The ultimate goal of this project is to implement the 5G core network (5GC) defined in 3GPP Release 15 (R15) and beyond.

Please be sure to visit their project website free5GC and their Github repository.

They provided us with the following trace, which we will use to illustrate the examples.

free5GC trace

HTTP/2 trace

While this tool was born with 5GC traces in mind, it turns out to be useful for visualizing HTTP/2 traces too. We included this HTTP/2 example because at the beginning we could not find any freely available 5GC traces (they typically contain intra-NF communication and/or proprietary protocol specifics, so they are not easy to come by).

As an alternative, we will use the sample HTTP/2 capture from the Wireshark wiki and show you how to use the application with the http2-h2c.pcap file.

Opened in Wireshark, the capture looks as shown below:

HTTP/2 capture

The following command converts the Wireshark trace into the SVG diagram shown below, given that plantuml.jar and the WiresharkPortable_3.1.0 folder are placed where they should be:

python trace_visualizer.py -wireshark "3.1.0" "<file path>\Sample of HTTP2.pcap"

Output screenshot (Link to SVG file)

Adding pod data

Sometimes you would like to group several diagram actors into one (e.g. a pod with multiple calico interfaces) or several pods belonging to one namespace (e.g. belonging to the same NF).

Just use the optional -pods parameter and pass it the output of kubectl get pods --all-namespaces -o yaml

e.g. python trace_visualizer.py -pods "<path to YAML file>" -wireshark "3.1.0" "<file path>\Sample of HTTP2.pcap"

The script will now output a pod and namespace version of the SVGs, where the IPs will be replaced with pod names or namespace names respectively.

This allows you to view message flows between pods and/or namespaces and have a clearer view of the messaging.

The application currently maps the following information found in the kubectl YAML file (a parsing sketch follows below):

  • namespace association within the metadata elements
  • IP addresses associated to this pod:
    • cni.projectcalico.org/podIP within the annotations metadata element
    • ips elements within the JSON data within k8s.v1.cni.cncf.io/networks-status

The name assigned to the pod is that found under the name element.
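
A minimal sketch of how such a mapping could be derived from the kubectl YAML (assuming PyYAML; it only covers the fields listed above and skips error handling):

import json
import yaml

def ip_to_pod(kubectl_yaml_path):
    mapping = {}
    with open(kubectl_yaml_path) as f:
        doc = yaml.safe_load(f)
    for pod in doc.get('items', []):
        meta = pod['metadata']
        annotations = meta.get('annotations', {})
        ips = set()
        if 'cni.projectcalico.org/podIP' in annotations:
            ips.add(annotations['cni.projectcalico.org/podIP'].split('/')[0])  # strip a possible /32 suffix
        for network in json.loads(annotations.get('k8s.v1.cni.cncf.io/networks-status', '[]')):
            ips.update(network.get('ips', []))
        for ip in ips:
            mapping[ip] = (meta['name'], meta['namespace'])
    return mapping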

In case you only want to generate specific diagram types, you can use the -diagrams <diagram types> option, e.g. -diagrams "ip,k8s_pod,k8s_namespace". Supported diagram types:

  • ip: does not use k8s pod information for diagram generation
  • k8s_pod: generates diagrams where IPs are replaced by pod names and intra-pod communication (e.g. different Multus interfaces in a pod) is not shown
  • k8s_namespace: similar to k8s_pod but messages are grouped by namespace

Merging capture files

You may also provide, instead of a single capture file, a comma-separated list of capture files as input. In this case, the script will automatically call mergecap and merge the given capture files. This can be useful if you have capture files from e.g. several k8s worker nodes.

python trace_visualizer.py -wireshark "3.1.0" "<file path>\Sample of HTTP2.pcap,<file path>\Sample of another file.pcap"

The same Wireshark version will be used for all of the files for dissection.

Do note that this will only give you a useful output if you time-synchronized the hosts where the captures were taken (nothing to do with this script). Else, you will merge time-shifted captures.

Specifying HTTP/2 ports

Just use the -http2ports <ports> parameter. E.g. -http2ports "3000,80" tells Wireshark to decode communication on those ports as HTTP/2. This is useful if you are using non-standard ports for your communication.

Let us try running python trace_visualizer.py -wireshark latest "doc/free5gc.pcap"

We obtain the following trace diagram: free5GC plain

SVG full diagram here

There seem to be some things missing. That is because the SBI communication runs on varying ports depending on the configuration/deployment. While some ports are used by default, those may not be the ones your deployment is using.

We know from our configuration (or looking at the Wireshark trace) that we have SBI communication on ports 29502, 29503, 29504, 29507, 29509, 29518.

Let's try again, now running python trace_visualizer.py -wireshark 3.2.2 -http2ports "29502,29503,29504,29507,29509,29518" -limit 200 "<path_to_trace>\free5gc.pcap". Note: the -limit option overrides the default maximum of 100 messages per output SVG file (otherwise PlantUML's Java runtime often runs out of memory and crashes).

The output looks more like a 5GC trace now: free5GC plain

SVG full diagram here

Using several Wireshark versions for decoding

While testing a product under heavy development, you may find the case where some NAS messages follow a certain 3GPP release while some other messages follow another.

This may result in no single Wireshark version being capable of decoding all messages, i.e. you will always have some [Malformed packet] payloads shown no matter what version you use.

In order to enable packet decoding using multiple Wireshark versions, use the option -wireshark <comma-separated-list-of-wireshark-versions>.

Example: -wireshark "2.9.0,3.1.0" will use Wireshark 2.9.0 as the baseline dissector and the rest (in this case 3.1.0) as alternatives. In case a malformed payload is detected for a given packet, the first non-malformed alternative (in this case 3.1.0; you may specify more) will be used instead.

You also have the option to use the OS-installed Wireshark version by using OS as the version string. In this case, the script will not generate a full path for the tshark executable but rather call subprocess.run() with only the command itself and no full path.

Omitting HTTP/2 headers

It may happen that you have a lot of additional headers and that they make the generated figures less readable. In this case, you can use the -ignorehttpheaders option.

Example: -ignorehttpheaders "x-forwarded-for,x-forwarded-proto,x-envoy-internal,x-request-id,x-istio-attributes,x-b3-traceid,x-b3-spanid,x-b3-sampled"

This omits each of the HTTP/2 headers in the list from the generated figures.

Adding additional host labels

It may happen that your system uses a mix of VMs and containers. Or that the mapping for certain IPs is missing. The -openstackservers <path to YAML file> option allows you to set an additional IP mapping for generating labels.

The syntax of the YAML file is chosen so that it is easy to export the data from OpenStack and directly use it as input without further processing.

Any IP found in the fixed field will be mapped to the server label. E.g. messages originating from 192.168.10.2 and 192.168.6.19 IPs will both be shown as originating from the same element, which will be labeled Test system running on VM with several IPs.

Only the labels shown are parsed. Your YAML file may contain additional labels (most probably the case if it is an exported file).

servers:
  'Test system running on VM with several IPs':
    interfaces:
      test:
        fixed:     "192.168.10.2"
      n1_n2:
        fixed:     "192.168.3.19"
      n3:
        fixed:     "192.168.5.19"
      n6:
        fixed:     "192.168.6.19"
      oam:
        fixed:     "192.168.1.19"
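
Read back into Python (assuming PyYAML), the mapping from fixed IP to server label boils down to something like this sketch:

import yaml

def ip_to_server_label(servers_yaml_path):
    with open(servers_yaml_path) as f:
        doc = yaml.safe_load(f)
    mapping = {}
    for label, server in doc.get('servers', {}).items():
        for interface in server.get('interfaces', {}).values():
            mapping[interface['fixed']] = label   # e.g. '192.168.10.2' -> 'Test system running on VM with several IPs'
    return mapping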

The example servers.yaml file above is used to generate the diagram below:

Run python trace_visualizer.py -wireshark 3.2.2 -http2ports "29502,29503,29504,29507,29509,29518" -limit 200 -openstackservers "<path_to_servers.yaml>\servers.yaml" -show_selfmessages True "<path_to_trace>\free5gc.pcap"

Note: self-messages are typically omitted from the generated diagram. Since in this case part of the 5GC is running on localhost, the -show_selfmessages True option is used to show them.

free5GC plain

SVG full diagram here

Adding timestamps

There is an option to add relative timestamps to the generated diagrams (e.g. to measure processing time).

Just use the -show_timestamp True option, e.g. python trace_visualizer.py -wireshark 3.2.2 -http2ports "29502,29503,29504,29507,29509,29518" -limit 200 -openstackservers "<path_to_servers.yaml>\servers.yaml" -show_selfmessages True -show_timestamp True "<path_to_trace>\free5gc.pcap"

free5GC plain

SVG full diagram here

Showing only certain packets

Do you want to put some pictures in a Wiki or send a diagram to a colleague but there is too much information? There is the option to omit most of the information and also to explicitly show some: -simple_diagrams and -force_show_frames

As an example, we will generate a diagram showing only a couple of NAS messages for PDU session establishment: frames 15 (registration request), 175 (registration complete) and 228 (PDU session establishment accept).

Just add the -simple_diagrams True and -force_show_frames options, e.g. python trace_visualizer.py -wireshark 3.2.2 -http2ports "29502,29503,29504,29507,29509,29518" -limit 200 -openstackservers "<path_to_servers.yaml>\servers.yaml" -show_selfmessages True -show_timestamp True -simple_diagrams True -force_show_frames "15,175,228" "<path_to_trace>\free5gc.pcap"

free5GC plain

SVG full diagram here

Sharing an edited trace

Maybe you have a vendor trace but cannot share a diagram because it contains proprietary information? Or you have a real trace that you cannot share because it contains personal information (e.g. real IMSIs)?

There are some workarounds you can use to get around this.

Let us assume that we want to show the information below but the actual IMSIs (imsi-2089300007487) in frames 36, 38 cannot be shown. free5GC plain

Since this application works on an exported PDML file, you can just edit the generated PDML file and remove/edit from there any information you want. As long as the XML is valid, the output will still be generated.

Just search for <field name="num" pos="0" show="36" in the PDML file to go to frame 36 and edit it accordingly.

Note that you do not have to edit the parsed HTTP/2 fields but rather the http2.data.data hex payload. It is cumbersome, but since this application does HTTP/2 frame reconstruction (a data payload can span more than one HTTP/2 frame), it works with the binary payload. Just use a hex-to-ASCII converter (e.g. here), edit the payload and convert it back to hex (e.g. here). In this case, we will edit the payloads to change imsi-2089300007487 to imsi-XXXXXXXXXXXXX (removed). You can find the edited trace here.
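
If you prefer not to use an online converter, a couple of Python lines do the same hex round-trip (the hex string here is just the IMSI fragment, not a full payload):

payload_hex = '696d73692d32303839333030303037343837'          # hex for 'imsi-2089300007487'
text = bytes.fromhex(payload_hex).decode('utf-8')
edited = text.replace('imsi-2089300007487', 'imsi-XXXXXXXXXXXXX')
payload_hex_edited = edited.encode('utf-8').hex()              # paste this back into the PDML value
print(payload_hex_edited)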

The same goes for frame 38. The output can be seen below: free5GC plain

SVG full diagram here

Editing headers is simpler. To modify the header shown below, free5GC plain

You just need to go to frame 31 and to the <field name="http2.header" showname="Header: :path: . The application uses the show value of each header to generate the diagrams (in this case <field name="http2.header.value"). In this case we changed the value to show="/nudr-dr/v1/subscription-data/imsi-XXXXXXXXXXXXX/authentication-data/authentication-subscription".

The result can be seen below: free5GC plain

Maybe some editing features will be added in the future, but that will depend on whether they are really needed or not.

Ordering labels in a specific order

Just use -force_order, e.g. -force_order "gNB,AMF,SMF,UDM"

Missing HTTP headers and HPACK

A common issue is that a packet capture may have been started after the HPACK header table has been initialized, which leads to missing header entries in the packet capture.

While not really an issue to be solved here, you may find it useful to know that Wireshark does apparently provide a way to inject HTTP2/GRPC headers via uat.

You can find some information regarding HTTP2/GRPC header injection in the related feature request and also GRPC dissector documentation.

If you want to play around with the feature itself, it is also available in the GUI under Preferences->Protocols->HTTP2, where you can find the tables that can be set up via uat.

Proprietary protocol traces

For some use cases, the trace may not come from a direct capture, but it may rather be generated by a tool (e.g. an in-built tap in the 5GC software). In such cases, the protocol stack may not look "normal" (e.g. Ethernet/IP). One such example is shown below (the original trace can be found here):

one_packet.pcapng.

The actual data with which this script works is the exported PDML file, which for this specific protocol (exported_pdu) looks as follows:

<proto name="exported_pdu" showname="EXPORTED_PDU" size="45" pos="0">
  <field name="exported_pdu.tag" showname="Tag: PDU content dissector name (12)" size="9" pos="0" show="12" value="000c00056874747032">
    <field name="exported_pdu.tag_len" showname="Length: 5" size="2" pos="2" show="5" value="0005"/>
    <field name="exported_pdu.prot_name" showname="Protocol Name: http2" size="5" pos="4" show="http2" value="6874747032"/>
  </field>
  <field name="exported_pdu.tag" showname="Tag: IPv4 Source Address (20)" size="8" pos="9" show="20" value="001400040ace6c41">
    <field name="exported_pdu.tag_len" showname="Length: 4" size="2" pos="11" show="4" value="0004"/>
    <field name="exported_pdu.ipv4_src" showname="IPv4 Src: 10.206.108.65" size="4" pos="13" show="10.206.108.65" value="0ace6c41"/>
    <field name="ip.addr" showname="Source or Destination Address: 10.206.108.65" hide="yes" size="4" pos="13" show="10.206.108.65" value="0ace6c41"/>
    <field name="ip.src" showname="Source Address: 10.206.108.65" hide="yes" size="4" pos="13" show="10.206.108.65" value="0ace6c41"/>
  </field>
  <field name="exported_pdu.tag" showname="Tag: Source Port (25)" size="8" pos="17" show="25" value="001900040000a82e">
    <field name="exported_pdu.tag_len" showname="Length: 4" size="2" pos="19" show="4" value="0004"/>
    <field name="exported_pdu.src_port" showname="Src Port: 43054" size="4" pos="21" show="43054" value="0000a82e"/>
  </field>
  <field name="exported_pdu.tag" showname="Tag: IPv4 Destination Address (21)" size="8" pos="25" show="21" value="001500040ace6c5c">
    <field name="exported_pdu.tag_len" showname="Length: 4" size="2" pos="27" show="4" value="0004"/>
    <field name="exported_pdu.ipv4_dst" showname="IPv4 Dst: 10.206.108.92" size="4" pos="29" show="10.206.108.92" value="0ace6c5c"/>
    <field name="ip.addr" showname="Source or Destination Address: 10.206.108.92" hide="yes" size="4" pos="29" show="10.206.108.92" value="0ace6c5c"/>
    <field name="ip.dst" showname="Destination Address: 10.206.108.92" hide="yes" size="4" pos="29" show="10.206.108.92" value="0ace6c5c"/>
  </field>
  <field name="exported_pdu.tag" showname="Tag: Destination Port (26)" size="8" pos="33" show="26" value="001a000400001b9e">
    <field name="exported_pdu.tag_len" showname="Length: 4" size="2" pos="35" show="4" value="0004"/>
    <field name="exported_pdu.dst_port" showname="Dst Port: 7070" size="4" pos="37" show="7070" value="00001b9e"/>
  </field>
  [...]
</proto>

For such cases, the following options can be used (a small usage sketch follows the list):

  • custom_packet_filter: Originally, this script only considers frames in the capture file that contain IPv4 or IPv6 protocols. This is done by filtering out packets not matching the packet.findall("proto[@name='ip']") or packet.findall("proto[@name='ipv6']") XPath expressions. The string you set in this parameter is additionally used in a proto[@name='{custom_packet_filter}'] filter, e.g. exported_pdu.
  • custom_ip_src: An XPath expression pointing to an element from which the source IP address can be extracted, e.g. field[@name='exported_pdu.ipv4_src']
  • custom_ip_src_attribute: While custom_ip_src selects the element from which the IP source address can be extracted, custom_ip_src_attribute points to the attribute within the element containing the actual text you want to use as label, e.g. show results in 10.206.108.65 being shown and showname in IPv4 Src: 10.206.108.65
  • custom_ip_dst: Same as with custom_ip_src, e.g. field[@name='exported_pdu.ipv4_dst']
  • custom_ip_dst_attribute: Same as with custom_ip_src_attribute, e.g. show
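
A rough illustration of how these XPath expressions fit together on an exported PDML file (the file path is a placeholder and this is not the script's actual code):

import xml.etree.ElementTree as ET

custom_packet_filter = 'exported_pdu'
custom_ip_src = "field[@name='exported_pdu.ipv4_src']"
custom_ip_src_attribute = 'show'

root = ET.parse('trace.pdml').getroot()
for packet in root.iter('packet'):
    if not packet.findall(f"proto[@name='{custom_packet_filter}']"):
        continue   # frame does not match the custom packet filter
    src = packet.findall(f".//{custom_ip_src}")
    if src:
        print(src[0].get(custom_ip_src_attribute))   # 'show' -> 10.206.108.65, 'showname' -> IPv4 Src: 10.206.108.65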

For this specific example, the following call can be used:

python trace_visualizer.py -wireshark 4.0.5 -limit 70 -show_timestamp True -custom_packet_filter "exported_pdu" -custom_ip_src "field[@name='exported_pdu.ipv4_src']" -custom_ip_dst "field[@name='exported_pdu.ipv4_dst']" -custom_ip_src_attribute "show" -custom_ip_dst_attribute "show" "<path>\one_packet.pcapng"

Which generates the following output: one_packet_diagram

Do note that in this case, the NAS protocol is shown because Wireshark did indeed decode the NAS message in the MIME multipart payload.

For traces where the data was not decoded, such as this one, the decoded protocol is not shown.

In this specific example, Wireshark could not detect the multipart messages because the header with the boundary information was compressed with HPACK and the table entry was not present in the capture. While for JSON the formatting is done automatically (just some pretty-printing, after all), for binary protocols no decoding is implemented here.

python trace_visualizer.py -wireshark 4.0.5 -http2ports "65413,65428,65438,65440,65457,65462,65495,65482,65501,65504,65512,65514,65521,65528,31382,8080,34385" -show_timestamp True "<path>\Service Request Connected_205_210.pcap"

Service Request Connected_205_210

Notes

There may be some issues with HTTP/2 frame fragment reconstruction, so drop me a line if you run into any.

For MIME Multipart messages that are not JSON, the diagrams show the binary content in hex form and, if Wireshark dissectors decoded the data, any decoded protocol present.
