Paper notes: Non-Profiled Deep Learning-based Side-Channel Attacks with Sensitivity Analysis (DDLA)


Benjamin Timon
eShard, Singapore

Background concepts

Non-profiled attacks:
Assume the attacker can only collect side-channel traces from the target device. Examples:
Differential Power Analysis (DPA), Correlation Power Analysis (CPA), or Mutual Information Analysis (MIA).

Profiled attacks:
Assume the attacker also has a programmable device identical to the target device. Examples:
Template Attacks, Stochastic attacks, or Machine-Learning-based attacks.
1. Profiling phase: use the collected side-channel traces to characterize the leakage for every possible key value k ∈ K.
2. Attack phase: classify the target's side-channel traces against the leakage characterization to recover the key value k.

Points of interest (POI): the leakage points within a side-channel trace; classification is performed on the POIs.

Deep Learning (DL): MLP, CNN

De-synchronized: misaligned (unaligned) traces

Contribution

  • Proposes Differential Deep Learning Analysis (DDLA).
  • Focuses on realizing Sensitivity Analysis in a non-profiled context.

DDLA

Attack algorithm
AES

DDLA algorithm
[Figure: the DDLA algorithm, as given in the paper]
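As a companion to the algorithm above, here is a minimal sketch of the DDLA loop in Python/NumPy. All function names are ours (not from the paper's code), and the identity S-box is a placeholder: for each key-byte guess, the traces are relabeled with one bit of the guessed Sbox output, a small network is retrained from scratch, and the guess whose training metrics converge best is taken as the key.

```python
import numpy as np

SBOX = np.arange(256)  # placeholder; substitute the real AES S-box

def lsb_labels(plaintext_bytes, key_guess):
    """Binary labels: LSB of Sbox(d XOR k) for each trace."""
    return (SBOX[plaintext_bytes ^ key_guess] & 1).astype(np.int64)

def ddla(traces, plaintext_bytes, train_fn, n_epochs=50):
    """Run one DDLA attack on a key byte.

    train_fn(traces, labels, n_epochs) must train a fresh network and
    return a list of per-epoch accuracies (or another metric).
    Returns the best guess and the full guess -> metrics matrix.
    """
    metrics = {}
    for k in range(256):
        metrics[k] = train_fn(traces, lsb_labels(plaintext_bytes, k), n_epochs)
    # The guess whose training converges best (highest metric) wins.
    best = max(metrics, key=lambda k: max(metrics[k]))
    return best, metrics
```

The same skeleton supports the paper's sensitivity-analysis extension: after training for each guess, gradients of the loss with respect to the input can be accumulated to locate the POIs.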

Experimental hardware & framework

  • PC:64 GB of RAM, a GeForce GTX 1080Ti GPU & two Intel Xeon E5-2620 v4 @2.1GHz CPUs.
  • ChipWhisperer-Lite (CW): Atmel XMEGA128 chip;
    ASCAD (collected from an 8-bit ATMega8515 board).
  • MLPexp & CNNexp.

Experiment parameters
1 Training parameters and details

1.1 Loss function
	Mean Squared Error (MSE) loss function for all experiments.
1.2 Accuracy
	The accuracy was computed as the proportion of samples correctly classified.
1.3 Batch size
	A batch size of 1000 was used for all experiments.
1.4 Learning rate
	For all experiments we used a learning rate of 0.001.
1.5 Optimizer
	We used the Adam optimizer with the default configuration (β1 = 0.9, β2 = 0.999, ε = 1e-08, no learning rate decay).
1.6 Input normalization
	We normalize the input traces by removing the mean of the traces and scaling the traces between -1 and 1.
1.7 Labeling
	• For all simulations (first and high-order), we used the MSB labeling.
	• For attacks on the unprotected CW and on ASCAD, we used the LSB labeling.
	• For the attack on the CW with 2 masks, we used the MSB labeling.
1.8 Deep Learning Framework
	We used PyTorch 0.4.1.
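The training setup of Section 1 can be sketched in PyTorch as follows. The one-hot MSE targets for the 2-neuron softmax output and the exact normalization (global mean removal, global scaling into [-1, 1]) are our assumptions; the hyperparameters (MSE loss, batch size 1000, Adam with lr = 0.001) are from the text.

```python
import numpy as np
import torch
import torch.nn as nn

def normalize(traces):
    """Sec. 1.6: remove the mean of the traces, scale into [-1, 1]."""
    centered = traces - traces.mean(axis=0)
    peak = np.abs(centered).max()
    return centered / peak if peak > 0 else centered

def train(model, traces, labels, n_epochs=50, batch_size=1000):
    """Train one network with the paper's hyperparameters (Sec. 1).

    Returns the per-epoch accuracy curve used as the DDLA metric.
    """
    x = torch.tensor(normalize(traces), dtype=torch.float32)
    y = torch.eye(2)[torch.tensor(labels)]          # one-hot MSE targets
    opt = torch.optim.Adam(model.parameters(), lr=0.001)
    loss_fn = nn.MSELoss()
    accs = []
    for _ in range(n_epochs):
        for i in range(0, len(x), batch_size):
            xb, yb = x[i:i + batch_size], y[i:i + batch_size]
            opt.zero_grad()
            loss = loss_fn(model(xb), yb)
            loss.backward()
            opt.step()
        with torch.no_grad():                       # Sec. 1.2: accuracy
            acc = (model(x).argmax(1) == torch.tensor(labels)).float().mean()
        accs.append(acc.item())
    return accs
```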


2 Network architectures

2.1 MLPsim
	• Dense hidden layer of 70 neurons with relu activation
	• Dense hidden layer of 50 neurons with relu activation
	• Dense output layer of 2 neurons with softmax activation
2.2 CNNsim
	• Convolution layer with 8 filters of size 8 (stride of 1, no padding) with relu activation.
	• Max pooling layer with pooling size of 2.
	• Convolution layer with 4 filters of size 4 (stride of 1, no padding) with relu activation.
	• Max pooling layer with pooling size of 2.
	• Dense output layer of 2 neurons with softmax activation

2.3 MLPexp
	• Dense hidden layer of 20 neurons with relu activation
	• Dense hidden layer of 10 neurons with relu activation
	• Dense output layer of 2 neurons with softmax activation
2.4 CNNexp
	• Convolution layer with 4 filters of size 32 (stride of 1, no padding) with relu activation.
	• Average pooling layer with pooling size of 2.
	• Batch normalization layer
	• Convolution layer with 4 filters of size 16 (stride of 1, no padding) with relu activation.
	• Average pooling layer with pooling size of 4.
	• Batch normalization layer
	• Dense output layer of 2 neurons with softmax activation
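The experimental networks of Sections 2.3 and 2.4 can be sketched in PyTorch as below. Layer sizes are from the text; inferring the CNNexp flatten size with a dummy forward pass is our implementation choice, not the paper's.

```python
import torch
import torch.nn as nn

def mlp_exp(n_samples):
    """MLPexp (Sec. 2.3): 20 -> 10 -> 2 with relu/softmax."""
    return nn.Sequential(
        nn.Linear(n_samples, 20), nn.ReLU(),
        nn.Linear(20, 10), nn.ReLU(),
        nn.Linear(10, 2), nn.Softmax(dim=1),
    )

class CNNexp(nn.Module):
    """CNNexp (Sec. 2.4): two conv/avg-pool/batch-norm stages + dense."""

    def __init__(self, n_samples):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 4, kernel_size=32), nn.ReLU(),
            nn.AvgPool1d(2), nn.BatchNorm1d(4),
            nn.Conv1d(4, 4, kernel_size=16), nn.ReLU(),
            nn.AvgPool1d(4), nn.BatchNorm1d(4),
        )
        with torch.no_grad():  # infer the flatten size from a dummy trace
            n_flat = self.features(torch.zeros(1, 1, n_samples)).numel()
        self.classifier = nn.Sequential(nn.Linear(n_flat, 2), nn.Softmax(dim=1))

    def forward(self, x):               # x: (batch, n_samples)
        h = self.features(x.unsqueeze(1))
        return self.classifier(h.flatten(1))
```

MLPsim and CNNsim (Secs. 2.1/2.2) follow the same pattern with the layer sizes listed above.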

Experiments & results

CNN-DDLA against de-synchronized traces

Non-profiled; N = 3,000 de-synchronized traces; CNNexp.
(The paper also applied MLPexp and CPA to these traces; neither recovered the key.)

High-order DDLA simulations (the mask leakage locations are unknown): MLPsim

N = 5,000 traces, 1 mask;
• n = 50 samples per trace.
• Sbox leakage at t = 25: Sbox(di ⊕ k∗) ⊕ m1 + N(0, 1), where di and m1 are random and k∗ is a fixed key byte.
• Mask leakage at t = 5: m1 + N(0, 1).
• All other points on the traces are random values in [0, 255].
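The first-order masked simulation described above can be reproduced with a short NumPy sketch (our reconstruction; the identity S-box is a placeholder and the key/seed values are arbitrary):

```python
import numpy as np

SBOX = np.arange(256)  # placeholder; substitute the real AES S-box

def simulate_traces(N=5000, n=50, key=0x42, seed=0):
    """Simulated first-order masked traces: mask leakage at t = 5,
    masked Sbox leakage at t = 25, all other points uniform in [0, 255]."""
    rng = np.random.default_rng(seed)
    d = rng.integers(0, 256, N)                  # random plaintext bytes
    m1 = rng.integers(0, 256, N)                 # random mask bytes
    traces = rng.integers(0, 256, (N, n)).astype(float)
    traces[:, 5] = m1 + rng.normal(0, 1, N)                      # mask leakage
    traces[:, 25] = (SBOX[d ^ key] ^ m1) + rng.normal(0, 1, N)   # Sbox leakage
    return traces, d
```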

N = 10,000 traces, 2 masks;
Sbox leakage at t = 45: Sbox(di ⊕ k∗) ⊕ m1 ⊕ m2 + N(0, 1).

Second-order DDLA on ASCAD

16-byte fixed key; plaintexts & masks are random; ASCAD profiling set; MLPexp on the first 20,000 traces; ne = 50 epochs per guess.
Third-order DDLA on ChipWhisperer

N = 50,000 traces; n = 150 samples per trace; two masks m1, m2; masked Sbox: Sbox(d ⊕ k∗) ⊕ m1 ⊕ m2.
MLPexp network; ne = 100 epochs per guess.
The key is revealed after around 20 epochs per guess, without any leakage-combination pre-processing and without any assumptions about the masking scheme.

These attacks train a DL network for every key guess and inspect the full metric matrix only at the end;
instead, checking the matrix every few epochs and stopping as soon as the key can be recovered reduces the complexity.
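This early-stopping idea can be sketched in plain Python (the `metric_step` interface and the stopping margin are hypothetical, for illustration only):

```python
def ddla_early_stop(metric_step, n_guesses=256, max_epochs=100,
                    check_every=10, margin=0.1):
    """metric_step(k) advances training of guess k by one epoch and
    returns its current accuracy. Check the metric matrix every
    `check_every` epochs and stop as soon as one guess leads all
    others by `margin`."""
    acc = [0.0] * n_guesses
    for epoch in range(1, max_epochs + 1):
        for k in range(n_guesses):
            acc[k] = metric_step(k)
        if epoch % check_every == 0:
            ranked = sorted(range(n_guesses), key=lambda k: acc[k], reverse=True)
            if acc[ranked[0]] - acc[ranked[1]] >= margin:
                return ranked[0], epoch           # key found early
    return max(range(n_guesses), key=lambda k: acc[k]), max_epochs
```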
