Paper Notes: Non-Profiled Deep Learning-based Side-Channel Attacks with Sensitivity Analysis (DDLA)




Benjamin Timon
eShard, Singapore

Basic concepts

Non-profiled attacks:
Assume the attacker can only collect traces from the target device. Examples:
Differential Power Analysis (DPA), Correlation Power Analysis (CPA), or Mutual Information Analysis (MIA).

Profiled attacks:
Assume the attacker owns a programmable device identical to the target. Examples:
Template Attacks, Stochastic attacks, or Machine-Learning-based attacks.
1. Profiling phase: use the collected side-channel traces to characterize the leakage for every possible key value k ∈ K.
2. Attack phase: classify the side-channel traces based on the leakage analysis to recover the key value k.

Points of interest (POI): the leakage points within a side-channel trace; classification is performed on the POIs.

Deep Learning (DL): MLP, CNN

De-synchronized: misaligned traces (not synchronized in time).

Contribution

  • Proposes Differential Deep Learning Analysis (DDLA).
  • Focuses on performing Sensitivity Analysis in a non-profiled context.

DDLA

Attack algorithm (AES)

For each key guess k, every trace is labeled with a hypothetical intermediate bit (e.g., the MSB or LSB of Sbox(d ⊕ k)) and a fresh network is trained on the labeled traces. Only the correct guess yields a consistent labeling, so its training metrics (accuracy, loss) stand out from those of the wrong guesses.

DDLA algorithm
[Figure: DDLA algorithm pseudocode]
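
To make this concrete, here is a minimal PyTorch sketch of the DDLA loop under the training settings listed later in these notes (MSE loss on a 2-neuron softmax output, batch size 1000, Adam with learning rate 0.001). The helper names (`make_net`, `sbox`) and the accuracy-based ranking are illustrative assumptions, not the paper's reference code.

```python
# Minimal DDLA sketch (illustrative, not the paper's reference code).
# traces: (N, n) float array; pts: (N,) uint8 array of the targeted
# plaintext byte; sbox: 256-entry AES S-box lookup table (np.uint8);
# make_net: callable returning a fresh 2-class network.
import numpy as np
import torch
import torch.nn as nn

def ddla(traces, pts, sbox, make_net, n_epochs=50, batch_size=1000):
    X = torch.as_tensor(traces, dtype=torch.float32)
    acc = np.zeros((256, n_epochs))
    for k in range(256):                                 # one training per key guess
        bits = (sbox[pts ^ k] & 1).astype(np.int64)      # LSB labeling (MSB: >> 7)
        Y = torch.eye(2)[torch.from_numpy(bits)]         # one-hot targets for MSE
        net = make_net()
        opt = torch.optim.Adam(net.parameters(), lr=0.001)
        mse = nn.MSELoss()
        for e in range(n_epochs):
            perm = torch.randperm(len(X))
            correct = 0
            for i in range(0, len(X), batch_size):
                idx = perm[i:i + batch_size]
                out = net(X[idx])
                loss = mse(out, Y[idx])
                opt.zero_grad()
                loss.backward()
                opt.step()
                correct += (out.argmax(1) == Y[idx].argmax(1)).sum().item()
            acc[k, e] = correct / len(X)
    # the correct guess is the one whose accuracy curve stands out
    return int(acc.max(axis=1).argmax()), acc
```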

Experimental hardware & framework

  • PC: 64 GB of RAM, a GeForce GTX 1080Ti GPU, and two Intel Xeon E5-2620 v4 @ 2.1 GHz CPUs.
  • ChipWhisperer-Lite (CW) with an Atmel XMEGA128 chip;
    ASCAD (collected from an 8-bit ATMega8515 board).
  • Networks: MLPexp & CNNexp.

Experiment parameters
1 Training parameters and details

1.1 Loss function
	The Mean Squared Error (MSE) loss function was used for all experiments.
1.2 Accuracy
	The accuracy was computed as the proportion of samples correctly classified.
1.3 Batch size
	A batch size of 1000 was used for all experiments.
1.4 Learning rate
	For all experiments we used a learning rate of 0.001.
1.5 Optimizer
	We used the Adam optimizer with default configuration (β1 = 0.9, β2 = 0.999, ε = 1e-08, no learning rate decay).
1.6 Input normalization
	We normalized the input traces by removing the mean of the traces and scaling them between -1 and 1.
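
A minimal sketch of this normalization; the notes do not say whether the mean is removed per trace or over the whole set, so the global variant below is an assumption.

```python
import numpy as np

def normalize(traces):
    traces = traces - traces.mean(axis=0)   # remove the mean
    return traces / np.abs(traces).max()    # scale into [-1, 1]
```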
1.7 Labeling
	• For all simulations (first and high-order), we used the MSB labeling.
	• For attacks on the unprotected CW and on ASCAD, we used the LSB labeling.
	• For the attack on the CW with 2 masks, we used the MSB labeling.
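
Both labelings extract one bit of the hypothetical Sbox output for a key guess k; a sketch (`sbox` is the AES S-box table, `d` the plaintext bytes):

```python
def msb_label(d, k, sbox):
    return (sbox[d ^ k] >> 7) & 1   # most significant bit

def lsb_label(d, k, sbox):
    return sbox[d ^ k] & 1          # least significant bit
```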
1.8 Deep Learning Framework
	We used PyTorch 0.4.1.


2 Network architectures

2.1 MLPsim
	• Dense hidden layer of 70 neurons with relu activation
	• Dense hidden layer of 50 neurons with relu activation
	• Dense output layer of 2 neurons with softmax activation
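
A direct PyTorch rendering of this description (a sketch; `n` is the number of samples per trace, 50 in the simulations). MLPexp below is identical up to its layer sizes (20 and 10 neurons).

```python
import torch.nn as nn

def mlp_sim(n=50):
    return nn.Sequential(
        nn.Linear(n, 70), nn.ReLU(),
        nn.Linear(70, 50), nn.ReLU(),
        nn.Linear(50, 2), nn.Softmax(dim=1),
    )
```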
2.2 CNNsim
	• Convolution layer with 8 filters of size 8 (stride of 1, no padding) with relu activation.
	• Max pooling layer with pooling size of 2.
	• Convolution layer with 4 filters of size 4 (stride of 1, no padding) with relu activation.
	• Max pooling layer with pooling size of 2.
	• Dense output layer of 2 neurons with softmax activation

2.3 MLPexp
	• Dense hidden layer of 20 neurons with relu activation
	• Dense hidden layer of 10 neurons with relu activation
	• Dense output layer of 2 neurons with softmax activation
2.4 CNNexp
	• Convolution layer with 4 filters of size 32 (stride of 1, no padding) with relu activation.
	• Average pooling layer with pooling size of 2.
	• Batch normalization layer
	• Convolution layer with 4 filters of size 16 (stride of 1, no padding) with relu activation.
	• Average pooling layer with pooling size of 4.
	• Batch normalization layer
	• Dense output layer of 2 neurons with softmax activation
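
A PyTorch sketch of CNNexp for 1-D traces; the flatten size depends on the trace length `n`, so it is inferred with a dummy forward pass. CNNsim follows the same pattern with max pooling, no batch normalization, and filters of sizes 8 and 4.

```python
import torch
import torch.nn as nn

class CNNExp(nn.Module):
    def __init__(self, n):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 4, kernel_size=32), nn.ReLU(),  # 4 filters of size 32
            nn.AvgPool1d(2),
            nn.BatchNorm1d(4),
            nn.Conv1d(4, 4, kernel_size=16), nn.ReLU(),  # 4 filters of size 16
            nn.AvgPool1d(4),
            nn.BatchNorm1d(4),
        )
        with torch.no_grad():                            # infer the flatten size
            flat = self.features(torch.zeros(1, 1, n)).numel()
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(flat, 2), nn.Softmax(dim=1))

    def forward(self, x):                                # x: (batch, n) traces
        return self.head(self.features(x.unsqueeze(1)))
```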

Experiments & results

CNN-DDLA against de-synchronized traces

Non-profiled; N = 3,000 de-synchronized traces; CNNexp.
(The paper also applied MLPexp and CPA to these traces; both failed to recover the key.)

High-Order DDLA simulations (masked implementation, leakage locations unknown): MLPsim

N = 5,000 traces, 1 mask:
• n = 50 samples per trace.
• Masked Sbox leakage at t = 25: Sbox(di ⊕ k∗) ⊕ m1 + N(0, 1), where di and m1 are random and k∗ is a fixed key byte.
• Mask leakage at t = 5: m1 + N(0, 1).
• All other points on the traces are random values in [0; 255].
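
A sketch of how such first-order simulated traces could be generated (variable names and the fixed key value are illustrative assumptions):

```python
import numpy as np

def simulate_first_order(sbox, N=5000, n=50, k=0x2B, seed=0):
    rng = np.random.default_rng(seed)
    d = rng.integers(0, 256, N, dtype=np.uint8)      # random plaintext bytes
    m1 = rng.integers(0, 256, N, dtype=np.uint8)     # random mask
    traces = rng.integers(0, 256, (N, n)).astype(np.float64)
    traces[:, 5] = m1 + rng.standard_normal(N)       # mask leakage at t = 5
    traces[:, 25] = (sbox[d ^ k] ^ m1) + rng.standard_normal(N)  # masked Sbox at t = 25
    return traces, d
```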

N = 10,000 traces, 2 masks:
• Masked Sbox leakage at t = 45: Sbox(di ⊕ k∗) ⊕ m1 ⊕ m2 + N(0, 1).

Second order DDLA on ASCAD

16-byte fixed key; plaintexts and masks are random; ASCAD profiling set; MLPexp on the first 20,000 traces; ne = 50 epochs per guess.
Third order DDLA on ChipWhisperer

N = 50,000 traces; n = 150 samples per trace; two masks m1, m2; masked Sbox: Sbox(d ⊕ k∗) ⊕ m1 ⊕ m2.
MLPexp network; ne = 100 epochs per guess.
The key is revealed after around 20 epochs per guess, without any leakage-combination pre-processing and without any assumption about the masking method.

In these attacks a network is trained for every key guess, with labels derived from one bit of the intermediate value, and the metrics are inspected once training ends; inspecting the metrics every few epochs instead, and stopping as soon as one guess stands out, reduces the attack complexity.
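
One simple way to implement such a check (the dominance margin is an assumption):

```python
import numpy as np

def standout_guess(acc, margin=0.05):
    # acc: (256, e) accuracies for the e epochs run so far per guess;
    # return the guess once it clearly dominates all others, else None
    best = acc.max(axis=1)
    top, runner_up = np.sort(best)[-1], np.sort(best)[-2]
    return int(best.argmax()) if top - runner_up > margin else None
```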
