Paper notes: Non-Profiled Deep Learning-based Side-Channel Attacks with Sensitivity Analysis (DDLA)
Benjamin Timon
eShard, Singapore
Basic concepts
Non-profiled attacks:
Assume the attacker can only acquire traces from the target device. Examples:
Differential Power Analysis (DPA), Correlation Power Analysis (CPA), or Mutual Information Analysis (MIA).
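As a refresher, here is a minimal NumPy sketch of one of these, CPA: for each key guess, correlate a Hamming-weight leakage hypothesis with the measured traces and keep the guess with the highest correlation peak. The `traces`, `plaintexts`, and `sbox` inputs are hypothetical placeholders (`sbox` being the standard 256-entry AES S-box), so this is an illustration, not the paper's code.

```python
import numpy as np

# Hamming-weight lookup table for all byte values
HW = np.array([bin(x).count("1") for x in range(256)])

def cpa_attack(traces, plaintexts, sbox):
    """traces: (N, n_samples) float array; plaintexts: (N,) byte array;
    sbox: the standard 256-entry AES S-box as an array."""
    t = traces - traces.mean(axis=0)          # center each sample point
    best_guess, best_peak = 0, 0.0
    for k in range(256):
        # Hypothetical leakage: HW of the first-round S-box output
        h = HW[sbox[plaintexts ^ k]].astype(float)
        h -= h.mean()
        # Pearson correlation between hypothesis and every sample point
        corr = h @ t / (np.linalg.norm(h) * np.linalg.norm(t, axis=0) + 1e-12)
        peak = np.abs(corr).max()
        if peak > best_peak:
            best_guess, best_peak = k, peak
    return best_guess, best_peak
```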
Profiling attacks:
Assume the attacker owns a programmable device identical to the target device. Examples:
Template Attacks, Stochastic attacks, or Machine-Learning-based attacks.
1. Profiling (analysis) phase: using the collected side-channel traces, perform leakage analysis for every possible key value k ∈ K.
2. Attack phase: classify the side-channel traces based on the leakage analysis to recover the key value k*.
points of interest (POI): the leakage points within a side-channel trace; classification is performed on the POIs.
Deep Learning (DL): MLP, CNN
de-synchronized: traces that are not aligned (misaligned)
Contribution
- Proposes Differential Deep Learning Analysis (DDLA).
- Focuses on realizing Sensitivity Analysis in a non-profiled context.
DDLA
Attack algorithm
AES
The DDLA algorithm in detail
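As a rough illustration of the idea (a sketch, not the paper's exact code), here is a minimal PyTorch rendering of the DDLA loop using the training settings listed in the next section (MSE loss, Adam, single-bit labels). `build_model` and `make_labels` are hypothetical hooks standing in for the network constructor and the bit labeling; batching is omitted for brevity.

```python
import torch
import torch.nn.functional as F

def ddla(traces, plaintexts, build_model, make_labels, n_epochs=50):
    """Train one fresh network per key-byte guess; the correct guess is the
    one whose training metrics (accuracy/loss) stand out from the rest."""
    X = torch.as_tensor(traces, dtype=torch.float32)
    history = {}
    for k in range(256):
        y = torch.as_tensor(make_labels(plaintexts, k), dtype=torch.long)
        y1h = F.one_hot(y, 2).float()       # one-hot targets for the MSE loss
        model = build_model()
        opt = torch.optim.Adam(model.parameters(), lr=0.001)
        accs = []
        for _ in range(n_epochs):           # full-batch here for brevity;
            opt.zero_grad()                 # the paper uses batches of 1000
            out = model(X)                  # softmax output, shape (N, 2)
            loss = F.mse_loss(out, y1h)
            loss.backward()
            opt.step()
            accs.append((out.argmax(1) == y).float().mean().item())
        history[k] = accs
    # the guess whose accuracy curve rises the highest is the key candidate
    return max(history, key=lambda k: max(history[k]))
```

Because one network is trained from scratch per guess, the metric curves are directly comparable across the 256 guesses.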
Experimental hardware & framework
- PC:64 GB of RAM, a GeForce GTX 1080Ti GPU & two Intel Xeon E5-2620 v4 @2.1GHz CPUs.
- ChipWhisperer-Lite (CW): Atmel XMEGA128 chip
- ASCAD (collected from an 8-bit ATMega8515 board)
- Networks: MLPexp & CNNexp
Experimental parameters
1 Training parameters and details
1.1 Loss function
The Mean Squared Error (MSE) loss function was used for all experiments.
1.2 Accuracy
The accuracy was computed as the proportion of samples correctly classified.
1.3 Batch size
A batch size of 1000 was used for all experiments.
1.4 Learning rate
For all experiments we used a learning rate of 0.001.
1.5 Optimizer
We used the Adam optimizer with the default configuration (β1 = 0.9, β2 = 0.999, ε = 1e-08, no learning rate decay).
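In PyTorch this is simply the default Adam constructor (sketch; the `model` here is just a placeholder):

```python
import torch

model = torch.nn.Linear(50, 2)   # placeholder network for illustration
optimizer = torch.optim.Adam(model.parameters(), lr=0.001,
                             betas=(0.9, 0.999), eps=1e-08)
```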
1.6 Input normalization
We normalized the input traces by removing the mean of the traces and scaling the traces between -1 and 1.
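A small NumPy sketch of this preprocessing (the notes do not spell out whether the statistics are taken per trace or per sample point; per-sample-point mean removal with global scaling is assumed here):

```python
import numpy as np

def normalize(traces):
    """Remove the mean, then scale the traces into [-1, 1]."""
    t = traces - traces.mean(axis=0)   # per-sample-point mean removal
    return t / np.abs(t).max()         # global scaling into [-1, 1]
```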
1.7 Labeling
• For all simulations (first and high-order), we used the MSB labeling.
• For attacks on the unprotected CW and on ASCAD, we used the LSB labeling.
• For the attack on the CW with 2 masks, we used the MSB labeling.
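For reference, the two labelings can be computed from the S-box output as below (sketch; `sbox` is assumed to be the standard 256-entry AES S-box as an array, `d` the plaintext byte(s), `k` the key guess):

```python
def msb_label(d, k, sbox):
    """MSB labeling: most significant bit of Sbox(d XOR k)."""
    return (sbox[d ^ k] >> 7) & 1

def lsb_label(d, k, sbox):
    """LSB labeling: least significant bit of Sbox(d XOR k)."""
    return sbox[d ^ k] & 1
```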
1.8 Deep Learning Framework
We used PyTorch 0.4.1.
2 Network architectures
Deep Learning Book
Deep learning website
2.1 MLPsim
• Dense hidden layer of 70 neurons with relu activation
• Dense hidden layer of 50 neurons with relu activation
• Dense output layer of 2 neurons with softmax activation
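A possible PyTorch rendering of MLPsim (MLPexp in 2.3 follows the same pattern with 20/10 hidden neurons); `n_samples` is the trace length:

```python
import torch.nn as nn

class MLPsim(nn.Module):
    def __init__(self, n_samples):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_samples, 70), nn.ReLU(),   # dense hidden, 70 neurons
            nn.Linear(70, 50), nn.ReLU(),          # dense hidden, 50 neurons
            nn.Linear(50, 2), nn.Softmax(dim=1),   # dense output, softmax
        )

    def forward(self, x):
        return self.net(x)
```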
2.2 CNNsim
• Convolution layer with 8 filters of size 8 (stride of 1, no padding) with relu activation.
• Max pooling layer with pooling size of 2.
• Convolution layer with 4 filters of size 4 (stride of 1, no padding) with relu activation.
• Max pooling layer with pooling size of 2.
• Dense output layer of 2 neurons with softmax activation
2.3 MLPexp
• Dense hidden layer of 20 neurons with relu activation
• Dense hidden layer of 10 neurons with relu activation
• Dense output layer of 2 neurons with softmax activation
2.4 CNNexp
• Convolution layer with 4 filters of size 32 (stride of 1, no padding) with relu activation.
• Average pooling layer with pooling size of 2.
• Batch normalization layer
• Convolution layer with 4 filters of size 16 (stride of 1, no padding) with relu activation.
• Average pooling layer with pooling size of 4.
• Batch normalization layer
• Dense output layer of 2 neurons with softmax activation
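A possible PyTorch rendering of CNNexp (CNNsim in 2.2 is analogous, with max pooling, no batch normalization, and 8/4 filters of size 8/4); the flattened feature size is derived from the trace length `n_samples`:

```python
import torch
import torch.nn as nn

class CNNexp(nn.Module):
    def __init__(self, n_samples):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 4, kernel_size=32), nn.ReLU(),  # 4 filters of size 32
            nn.AvgPool1d(2),
            nn.BatchNorm1d(4),
            nn.Conv1d(4, 4, kernel_size=16), nn.ReLU(),  # 4 filters of size 16
            nn.AvgPool1d(4),
            nn.BatchNorm1d(4),
        )
        # conv (no padding) shrinks length by kernel-1; pooling divides it
        flat = 4 * (((n_samples - 31) // 2 - 15) // 4)
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(flat, 2), nn.Softmax(dim=1)
        )

    def forward(self, x):            # x: (batch, n_samples)
        return self.classifier(self.features(x.unsqueeze(1)))
```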
Experiments & results
CNN-DDLA against de-synchronized traces
Non-Profiled; N = 3,000 de-synchronized traces; CNNexp.
(The paper also tried MLPexp and CPA on these traces, but neither recovered the key.)
High-Order DDLA simulations (the leakage points of the masking scheme are unknown): MLPsim
With 1 mask: N = 5,000 traces;
• n = 50 samples per trace.
• Sbox leakage at t = 25: Sbox(di ⊕ k*) ⊕ m1 + N(0, 1), where di and m1 are random and k* is a fixed key byte.
• Mask leakage at t = 5, defined as m1 + N(0, 1).
• All other points on the traces are random values in [0; 255].
With 2 masks: N = 10,000 traces;
• Sbox leakage at t = 45: Sbox(di ⊕ k*) ⊕ m1 ⊕ m2 + N(0, 1).
Second order DDLA on ASCAD
16-byte fixed key; plaintexts & masks are random; ASCAD profiling set; MLPexp on the first 20,000 traces; ne = 50 epochs per guess
Third order DDLA on ChipWhisperer
N = 50,000 traces; n = 150 samples, with masks m1, m2; masked Sbox: Sbox(d ⊕ k*) ⊕ m1 ⊕ m2.
MLPexp network; ne = 100 epochs per guess;
the key is revealed after around 20 epochs per guess, without any leakage-combination pre-processing or any assumptions about the masking method.
DDLA trains one DL model per key guess on single-bit labels and inspects the training metrics at the end;
however, by checking the metrics every few epochs and stopping as soon as the key can be recovered, the complexity can be reduced.
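A sketch of that periodic check, assuming a `history` dict mapping each guess to its per-epoch accuracies (as in the DDLA sketch above); `margin` is a hypothetical threshold:

```python
def check_guesses(history, margin=0.1):
    """Return the best guess if its peak accuracy beats the runner-up
    by `margin`, else None (meaning: keep training)."""
    peaks = sorted(((max(a), k) for k, a in history.items()), reverse=True)
    (best, k_best), (second, _) = peaks[0], peaks[1]
    return k_best if best - second >= margin else None
```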