  • Stars: 174
  • Rank: 219,174 (Top 5%)
  • Language: Python
  • License: Other
  • Created: almost 7 years ago
  • Updated: over 1 year ago

Repository Details

Side Channels Analysis and Deep Learning

About

ASCAD (ANSSI SCA Database) is a set of databases that aims at providing a benchmarking reference for the SCA community: the purpose is to have something similar to the MNIST database that the Machine Learning community has been using for quite a while now to evaluate the performance of classification algorithms.

This repository provides scripts and Deep Learning models that demonstrate the efficiency of Deep Learning for SCA.

Several databases are available, depending on the underlying implementation and architecture. More information is available in the corresponding folders of this repository.

Copyright and license

Copyright (C) 2021, ANSSI and CEA

The databases, the Deep Learning models and the companion Python scripts of this repository are placed under the BSD license. Please check the LICENSE file for more information.

Getting the ASCAD databases and the trained models

Quick start guide

The scripts and the data are split in two places mainly because git is not suited for large files.

In order to get everything up and running, here are the steps to follow (the steps below use Unix shell syntax, but you can of course adapt them to your favorite shell):

  1. Clone the current repository to get the scripts:
git clone https://github.com/ANSSI-FR/ASCAD.git
  2. Click on the link corresponding to the chosen campaign and follow the instructions to download and unpack the database.
Implementation            | Campaign     | Type        | Link
ATMEGA boolean masked AES | fixed key    | Power (Icc) | link
ATMEGA boolean masked AES | variable key | Power (Icc) | link
STM32 affine masked AES   | variable key | Power (Icc) | link
  3. Install the latest version of TensorFlow 2 for your platform. Some versions of TensorFlow require a specific version of CUDA; up-to-date and detailed installation information can be found at https://www.tensorflow.org/install. You will also need Keras as the TensorFlow front-end API companion.

  4. Now you should be able to use the provided Python scripts. If you have the pip Python package manager installed, getting the scripts' dependencies is as simple as:

pip install numpy h5py matplotlib tqdm

Our scripts now rely on TensorFlow 2, so we only support Python 3. If you want to keep using Python 2, please refer to a previous version of the repository, for example by checking out commit 30f65bb. However, you will not be able to run those older scripts on our recent databases (STM32_AES_v2 and later).

ASCAD companion scripts

Required Python packages

In order to use the ASCAD companion scripts, the following dependencies need to be installed in your Python setup:

  • numpy
  • h5py
  • matplotlib
  • tqdm
  • tensorflow (version 2)

Note that these libraries are generally packaged in most Linux distributions and/or are available through the pip Python package manager. The case of the tensorflow library is a bit special: depending on the target platform, CPU or GPU acceleration may or may not be configured and used. For the ASCAD scripts, we strongly suggest (specifically for the profiling/training phase) using a GPU-backed configuration. Configuring tensorflow and GPU acceleration is not detailed here: please refer to the TensorFlow documentation for more details on the topic (you will also certainly need to install the Nvidia CUDA drivers and libraries for your platform).

Finally, please note that the scripts only work with Python 3 since we rely on TensorFlow 2.

A high-level description of the provided scripts is given hereafter. The default parameters of these scripts vary with the downloaded campaign and are provided as a parameter file in the corresponding folders:

Implementation            | Campaign     | Platform | Link
ATMEGA boolean masked AES | fixed key    | Linux    | link
ATMEGA boolean masked AES | variable key | Linux    | link
STM32 affine masked AES   | variable key | Linux    | link

Every script can be launched using the corresponding parameter file:

$ python ASCAD_generate.py path_to_parameters_file
$ python ASCAD_train_models.py path_to_parameters_file
$ python ASCAD_test_models.py path_to_parameters_file

These scripts can easily be run with custom parameters, either by modifying the default values within the script or by creating a new parameter file.

ASCAD generation

The ASCAD_generate.py script is used to generate ASCAD databases from any of the available raw traces databases.

This script takes as an argument the name of a file containing a Python dict with the following keys (an illustrative parameter file is shown after the list):

  • traces_file: this is the file name of the HDF5 database containing the raw traces and their metadata. Use this argument if all the traces are contained in a single file.
  • (optional) files_splitted: set this option to 1 if the HDF5 raw traces are split across several files.
  • (optional) traces_files_list: this is the list of HDF5 raw traces files when the files_splitted option is set to 1. In this case the traces_file argument is no longer required.
  • (optional) multilabel: set this option to 1 to get a multilabel dataset in the same vein as the ASCADv2 attacks.
  • labeled_traces_file: this is the name of the HDF5 output file.
  • profiling_index: this is the list of indices of the profiling traces.
  • attack_index: this is the list of indices of the attack traces.
  • target_points: this is the list of points of interest to extract from the traces.
  • profiling_desync: this is the maximum desynchronization applied to the original profiling traces; for each trace, a desynchronization value is drawn uniformly at random below this maximum.
  • attack_desync: this is the maximum desynchronization applied to the original attack traces; for each trace, a desynchronization value is drawn uniformly at random below this maximum.
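
For illustration, here is what such a parameter file could look like; the file names, indices and points of interest below are made up for the example and must be adapted to the campaign you downloaded (the parameter files shipped with each campaign contain the real values):

{
    "traces_file": "raw_traces.h5",            # hypothetical raw traces database
    "labeled_traces_file": "ASCAD_custom.h5",  # output database to generate
    "profiling_index": [n for n in range(0, 50000)],
    "attack_index": [n for n in range(50000, 60000)],
    "target_points": [n for n in range(45400, 46100)],
    "profiling_desync": 0,
    "attack_desync": 100,
}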

The labelize and multilabelize functions in the script are also of interest: tuning them makes it possible to generate databases that focus on other leaking spots of the masked AES (say byte 5 of the first round, byte 10 of the second round, and so on).
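
As a rough sketch (and not the repository's actual implementation), a labeling function targeting the first-round Sbox output of an arbitrary byte could look as follows; the function signature and the sbox argument are assumptions made for the example:

import numpy as np

def labelize_first_round_sbox(plaintexts, keys, sbox, byte_idx=5):
    # Label each trace with Sbox(p[byte_idx] ^ k[byte_idx]), the first-round
    # Sbox output of the chosen byte (byte 5 here, instead of the default byte).
    return np.asarray([sbox[p[byte_idx] ^ k[byte_idx]]
                       for p, k in zip(plaintexts, keys)], dtype=np.uint8)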

By tuning all these parameters, one is able to generate multiple ASCAD databases specialized in various values of interest, with customized desynchronization as well as customized profiling and attacking traces.
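
Once generated, a database can be quickly inspected with h5py. The group and dataset names below (Profiling_traces/traces, Attack_traces/traces, ...) follow the layout used by the ASCADv1 databases and should be checked against the file you actually generated:

import h5py

# hypothetical output file produced by ASCAD_generate.py
with h5py.File("ASCAD_custom.h5", "r") as f:
    prof_traces = f["Profiling_traces/traces"][()]   # shape: (n_profiling, n_points)
    prof_labels = f["Profiling_traces/labels"][()]   # Sbox-output labels
    attack_traces = f["Attack_traces/traces"][()]    # shape: (n_attack, n_points)
    print(prof_traces.shape, prof_labels.shape, attack_traces.shape)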

Testing the trained models

The trained models can be tested using the ASCAD_test_models.py script.

The script computes the rank of the real key byte among the 256 possible candidate bytes as a function of the number of attack traces the trained model takes as input for prediction: this is a classical efficiency check for classification algorithms in SCA (see the article "Study of Deep Learning Techniques for Side-Channel Analysis and Introduction to ASCAD Database" for a more formal definition of the key ranking). The evolution of the rank with respect to the number of traces is plotted using matplotlib.
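
To make the rank computation concrete, here is a minimal sketch (not the repository's implementation). It assumes the model outputs, for each attack trace, a probability vector over the 256 possible Sbox outputs, that the targeted plaintext byte of each trace is known, and that the AES Sbox is passed in as an argument:

import numpy as np

def key_byte_rank(predictions, plaintexts, true_key_byte, sbox):
    # predictions: (n_traces, 256) array of Pr[Sbox output | trace]
    # plaintexts:  (n_traces,) array with the targeted plaintext byte of each trace
    # true_key_byte: real value of the targeted key byte
    # sbox: the AES Sbox as a length-256 sequence
    log_scores = np.zeros(256)
    eps = 1e-40  # avoid log(0)
    for k in range(256):
        # under key hypothesis k, trace i should leak sbox[plaintexts[i] ^ k]
        idx = np.asarray([sbox[p ^ k] for p in plaintexts])
        log_scores[k] = np.sum(np.log(predictions[np.arange(len(idx)), idx] + eps))
    # rank 0 means the true key byte has the highest accumulated score
    order = np.argsort(log_scores)[::-1]
    return int(np.where(order == true_key_byte)[0][0])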

This script takes as an argument the name of a file containing a Python dict with the following keys (an illustrative parameter file is shown after the list):

  • model_file: this is an already trained model HDF5 file.
  • ascad_database: this is an ASCAD database one wants to check the trained model on.
  • num_traces: this is the maximum number of traces to process.
  • (optional) simulated_key: if the part of the dataset used during the attack step does not have a constant key, this option simulates a constant key equal to 0 when the rank is computed (the new plaintext is equal to the previous plaintext XORed with the current key).
  • (optional) target_byte: this is the index of the byte targeted during the attack. The default value is 2, for backward compatibility with ASCADv1.
  • (optional) multilabel: perform a multilabel attack by recombining the probabilities to compute Pr(Sbox|t). If set to 1, the permindices of the shuffling are taken into account in the recombination. If set to 2, the permindices are not taken into account. If set to 0 (default value), a single-label computation is performed.
  • (optional) save_file: if specified, the plot is saved to the file named save_file.
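
For illustration, a hypothetical test parameter file could look like this (the file names and values are made up for the example):

{
    "model_file": "my_trained_cnn.h5",      # hypothetical trained model
    "ascad_database": "ASCAD_custom.h5",    # hypothetical ASCAD database
    "num_traces": 2000,
    "target_byte": 2,
    "save_file": "rank_vs_traces.png",
}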

Training the models

We provide the ASCAD_train_models.py script in order to train the models. This script takes as an argument the name of a file containing a Python dict with the following keys (an illustrative parameter file is shown after the list):

  • ascad_database: this is an ASCAD database one wants to use for the model training.

  • training_model: this is the HDF5 file where the trained model will be saved.

  • network_type: this is the type of network of the model. Currently, five types of model are supported by the script:

    1. mlp: this is the multi-layer perceptron topology described in the ASCAD paper;

    2. cnn: this is the convolutional neural network topology described in the ASCAD paper;

    3. cnn2: this is the convolutional neural network topology described in the ASCAD paper, adapted to the format of the traces in the "variable key" campaign.

    4. multi_resnet: this is the multiclassification ResNet model presented at the "GDR SoC2 et Sécurité informatique" seminar (video). This model requires the knowledge of the permutation indices of the shuffling operation during the training step.

    5. multi_resnet_without_permind: this is the multiclassification ResNet model presented at the "GDR SoC2 et Sécurité informatique" seminar (video). This model does not take the shuffling into account.

  • epochs: this is the number of epochs used for the training.

  • batch_size: this is the size of the batch used for training.

  • (optional) train_len: this is the number of traces of the training dataset that are used to train the model. This number shall be less than the total number of traces in the training dataset.

  • (optional) validation_split: this is the fraction of the training dataset to use as a validation dataset during the training step.

  • (optional) multilabel: this option shall be set to a non-zero value when a multilabel model is trained. Set this value to 1 to train the multi_resnet model, and to 2 to train the multi_resnet_without_permind model.

  • (optional) early_stopping: if this option is set to a non-zero value, the model is trained with an early stopping strategy on the validation cross-entropy. The size of the validation dataset is controlled with the validation_split option. If the validation_split option was not previously set, its default value is 10% of the training dataset.
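
For illustration, a hypothetical training parameter file could look like this (values are made up for the example; the campaign folders contain the actual parameter files used for the published models):

{
    "ascad_database": "ASCAD_custom.h5",   # hypothetical ASCAD database
    "training_model": "my_cnn_model.h5",   # where the trained model will be saved
    "network_type": "cnn",
    "epochs": 75,
    "batch_size": 200,
    "validation_split": 0.1,
    "early_stopping": 1,
}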

More Repositories

  1. AD-control-paths: Active Directory Control Paths auditing and graphing tools (C, 650 stars)
  2. rust-guide: Recommendations for secure applications development with Rust (Shell, 591 stars)
  3. ADTimeline: Timeline of Active Directory changes with replication metadata (PowerShell, 466 stars)
  4. bmc-tools: RDP Bitmap Cache parser (Python, 465 stars)
  5. polichombr: Collaborative malware analysis framework (Python, 373 stars)
  6. MLA: Multi Layer Archive, a pure Rust encrypted and compressed archive file format (Rust, 322 stars)
  7. SecuML: Machine Learning for Computer Security (Python, 271 stars)
  8. libecc: Library for elliptic curves cryptography (C, 258 stars)
  9. DFIR-O365RC: PowerShell module for Office 365 and Azure log collection (PowerShell, 240 stars)
  10. ORADAD: Automated tool for dumping Active Directory data (C++, 215 stars)
  11. SmartPGP: a JavaCard implementation of the OpenPGP card specifications (Java, 178 stars)
  12. cry-me: CRY.ME (CRYptographic MEssaging application) (Kotlin, 168 stars)
  13. ultrablue: User-friendly Lightweight TPM Remote Attestation over Bluetooth (Kotlin, 163 stars)
  14. ctf: Selection challenges of the TeamFR for the ECSC 2019 (Python, 156 stars)
  15. AD-permissions: Active Directory permissions (ACL/ACE) auditing tools (PHP, 144 stars)
  16. DFIR4vSphere: PowerShell module for VMware vSphere forensics (PowerShell, 138 stars)
  17. tabi: BGP Hijack Detection (Python, 109 stars)
  18. bootcode_parser: A boot record parser that identifies known good signatures for MBR, VBR and IPL (Python, 97 stars)
  19. SysvolExplorer: Active Directory Group Policy analyzer (C++, 94 stars)
  20. Binacle: Full-bin indexation of binary files (Rust, 92 stars)
  21. audit-radius: A RADIUS authentication server audit tool (Python, 79 stars)
  22. AnoMark: Statistical learning algorithm that builds a model of the command lines of "Process Creation" events in order to detect anomalies in future events (Python, 78 stars)
  23. bits_parser: Extract BITS jobs from QMGR queue and store them as CSV records (Python, 73 stars)
  24. route_leaks: BGP Route Leaks Detection (Python, 69 stars)
  25. transdep: Discover SPOF in DNS dependency graphs (Go, 67 stars)
  26. SecAESSTM32: C and assembly library providing AES-128 encryption/decryption of messages for consumer-grade components (STM32F3/STM32F4 family) (C, 67 stars)
  27. x509-parser: a RTE-free X.509 parser (C, 58 stars)
  28. guide-journalisation-microsoft: Microsoft logging guide (PowerShell, 56 stars)
  29. mabo: MRT Parser (OCaml, 46 stars)
  30. chipsec-check: Tools to generate a Debian Linux distribution with chipsec to test hardware requirements (Shell, 45 stars)
  31. lidi: Transfer a raw TCP or Unix stream or files through a unidirectional link with forward error correction (Rust, 44 stars)
  32. shovel: Web interface to explore Suricata EVE outputs (JavaScript, 38 stars)
  33. picon: Picon (C, 38 stars)
  34. nogaxeh: Tools for analyzing hexagon code (C++, 38 stars)
  35. secAES-ATmega8515: Secure AES128 Encryption Implementation for ATmega8515 (Assembly, 34 stars)
  36. OVALI: Generic graph exploration, manipulation and visualization tool (Outil de Visualisation et Analyse de Liens Inter-objets) (JavaScript, 34 stars)
  37. Open-ISO7816-Stack: Open-source implementation of the ISO7816-3 communication protocol from the reader side; this protocol rules the interactions between a smartcard and a card reader when communicating through its contacts (C, 33 stars)
  38. packetweaver: A Python framework for script filing and task sequencing (Python, 26 stars)
  39. IPECC: A VHDL IP for ECC (Elliptic Curve Cryptography) hardware acceleration (VHDL, 26 stars)
  40. ADCP-DirectoryCrawler: AD-control-paths LDAP submodule (C, 18 stars)
  41. WAAD: Anomaly detection based on Windows authentication logs (Python, 17 stars)
  42. cardstalker: UART-driven smartcard reader working at the T=1 level (link and physical layers, see ISO7816-3), whereas most smartcard readers on the market only provide an APDU interface (application layer) (C, 17 stars)
  43. sftp2misp: Automation script to download JSON MISP files from a SFTP server and import them via API to a MISP instance (Python, 15 stars)
  44. mdbook-checklist: mdbook preprocessor for generating checklists and indexes (Rust, 13 stars)
  45. cornetto: Static website version management tool (JavaScript, 10 stars)
  46. pciemem: Linux kernel module for driving a USB3380 board, exposing a /dev/pciemem device node on the analysis machine that represents the physical memory of the machine under test (C, 10 stars)
  47. ORADAZ: Automated tool for dumping Azure configuration data (Rust, 10 stars)
  48. xsvgen: XML Schema Validator Generator (OCaml, 10 stars)
  49. ProTIP: Characterizes the actual connectivity between components of a PCI Express architecture (Prolog, 9 stars)
  50. coq-prelude: General-purpose monad typeclass hierarchy for Coq (Coq, 8 stars)
  51. scep: Security Contexts for Enhanced Protection Linux Security Module (C, 7 stars)
  52. libdrbg: A portable library implementing NIST SP 800-90A DRBGs (C, 7 stars)
  53. Faults_analyzer: Software for analyzing perturbation campaigns on components (Python, 6 stars)
  54. hackropole-hugo: A Hugo theme to host Capture-The-Flag (CTF) challenges as a static website like hackropole.fr (HTML, 6 stars)
  55. caradoc: A PDF parser and validator (6 stars)
  56. WSUS_Audit: Auditing scripts for WSUS infrastructures (6 stars)
  57. DroidWorks (Rust, 5 stars)
  58. libapn: A header-based C++ library developed to study vectorial Boolean functions, including but not limited to APN functions (C++, 5 stars)
  59. pycrate: A Python library to ease the development of encoders and decoders for various protocols and file formats; contains ASN.1 and CSN.1 compilers (5 stars)
  60. ADCP-libdev: AD-control-paths libraries submodule (C, 5 stars)
  61. caml-crush: Caml Crush, an OCaml PKCS#11 filtering proxy (3 stars)
  62. Faults_experiments: Raw results of component perturbation campaigns carried out by the ANSSI component security laboratory (Python, 2 stars)
  63. eurydice: A user-friendly solution to transfer files through a physical diode using the Lidi utility, complete with data retention, file history, user accounts and admin management; provides a scriptable API and a web interface (2 stars)
  64. DECODE: Malware detection tool for Windows PE files based on DFIR ORC data (Python, 2 stars)
  65. scantru: Non-Profiled Side Channel Analysis on NTRU (Jupyter Notebook, 1 star)
  66. concerto: Toolset to analyse TLS datasets (1 star)
  67. opkcs11-tool: managing and operating PKCS #11 security tokens in OCaml (1 star)