Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX AVX2


This post is a translation of: Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX AVX2

I am new to TensorFlow. I have recently installed it (Windows CPU version) and received the following message:

Successfully installed tensorflow-1.4.0 tensorflow-tensorboard-0.4.0rc2

Then when I tried to run

import tensorflow as tf
hello = tf.constant('Hello, TensorFlow!')
sess = tf.Session()
sess.run(hello)
# 'Hello, TensorFlow!'
a = tf.constant(10)
b = tf.constant(32)
sess.run(a + b)
# 42
sess.close()

(which I found through https://github.com/tensorflow/tensorflow )

I received the following message:

2017-11-02 01:56:21.698935: I C:\tf_jenkins\home\workspace\rel-win\M\windows\PY\36\tensorflow\core\platform\cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX AVX2

But when I ran

import tensorflow as tf
hello = tf.constant('Hello, TensorFlow!')
sess = tf.Session()
print(sess.run(hello))

it ran as it should and output Hello, TensorFlow!, which indicates that the installation was indeed successful but that something else is wrong.

Do you know what the problem is and how to fix it?


#1

Reference: https://stackoom.com/question/3BUij/您的CPU支持该TensorFlow二进制文件未编译为使用的指令-AVX-AVX


#2

What is this warning about?

Modern CPUs provide a lot of low-level instructions, besides the usual arithmetic and logic, known as extensions, e.g. SSE2, SSE4, AVX, etc. From Wikipedia:

Advanced Vector Extensions (AVX) are extensions to the x86 instruction set architecture for microprocessors from Intel and AMD proposed by Intel in March 2008 and first supported by Intel with the Sandy Bridge processor shipping in Q1 2011 and later on by AMD with the Bulldozer processor shipping in Q3 2011. AVX provides new features, new instructions and a new coding scheme.

In particular, AVX introduces fused multiply-accumulate (FMA) operations, which speed up linear algebra computation, namely dot-product, matrix multiply, convolution, etc. Almost every machine-learning training involves a great deal of these operations, hence will be faster on a CPU that supports AVX and FMA (up to 300%). The warning states that your CPU does support AVX (hooray!).

I'd like to stress here: it's all about CPU only.

Why isn't it used then?

Because the tensorflow default distribution is built without CPU extensions, such as SSE4.1, SSE4.2, AVX, AVX2, FMA, etc. The default builds (the ones from pip install tensorflow) are intended to be compatible with as many CPUs as possible. Another argument is that even with these extensions a CPU is a lot slower than a GPU, and medium- and large-scale machine-learning training is expected to be performed on a GPU.
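
If you want to check which of these extensions your own CPU actually reports before deciding whether a custom build is worthwhile, here is a minimal sketch. It is Linux-only, since it simply parses /proc/cpuinfo; on Windows or macOS you would need a different tool, so treat it as illustrative rather than portable.

# Minimal, Linux-only sketch: parse /proc/cpuinfo and report which of the
# relevant SIMD extensions the CPU advertises.
def cpu_flags():
    with open('/proc/cpuinfo') as f:
        for line in f:
            if line.startswith('flags'):
                return set(line.split(':', 1)[1].split())
    return set()

flags = cpu_flags()
for ext in ('sse4_1', 'sse4_2', 'avx', 'avx2', 'fma'):
    print(ext, 'supported' if ext in flags else 'not reported')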

What should you do?

If you have a GPU, you shouldn't care about AVX support, because most expensive ops will be dispatched on a GPU device (unless explicitly set not to). In this case, you can simply ignore this warning by

# Just disables the warning, doesn't enable AVX/FMA
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'

… or by setting export TF_CPP_MIN_LOG_LEVEL=2 if you're on Unix. Tensorflow is working fine anyway, but you won't see these annoying warnings.


If you don't have a GPU and want to utilize the CPU as much as possible, you should build tensorflow from source, optimized for your CPU, with AVX, AVX2, and FMA enabled if your CPU supports them. It has been discussed in this question and also in this GitHub issue. Tensorflow uses an ad-hoc build system called bazel, and building it is not that trivial, but it is certainly doable. After this, not only will the warning disappear, tensorflow performance should also improve.


#3

Update the tensorflow binary for your CPU & OS using this command:

pip install --ignore-installed --upgrade "Download URL"

The download URL of the whl file can be found here:

https://github.com/lakshayg/tensorflow-build


#4

CPU optimization with GPU

There are performance gains you can get by installing TensorFlow from source even if you have a GPU and use it for training and inference. The reason is that some TF operations only have a CPU implementation and cannot run on your GPU.
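
One way to see where each op actually lands (in the TF 1.x API used throughout this question) is to enable device-placement logging; a small sketch, using toy constants that are not part of the original answer:

import tensorflow as tf

# Sketch: log_device_placement prints, for every op, whether it was placed
# on the GPU or fell back to the CPU implementation.
config = tf.ConfigProto(log_device_placement=True)
with tf.Session(config=config) as sess:
    a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
    b = tf.constant([[1.0, 1.0], [0.0, 1.0]])
    print(sess.run(tf.matmul(a, b)))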

Also, there are some performance enhancement tips that make good use of your CPU. TensorFlow's performance guide recommends the following:

Placing input pipeline operations on the CPU can significantly improve performance. Utilizing the CPU for the input pipeline frees the GPU to focus on training.

For best performance, you should write your code to utilize your CPU and GPU to work in tandem, and not dump it all on your GPU if you have one. Having your TensorFlow binaries optimized for your CPU could pay off hours of saved running time, and you only have to do it once.
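
A hedged sketch of the "input pipeline on the CPU" tip in TF 1.x terms: the dataset and preprocessing ops are pinned to /cpu:0, while the model ops are left free to be dispatched to a GPU if one is present. The random data and the single dense layer are stand-ins, not part of the original answer.

import tensorflow as tf

# Pin the input pipeline to the CPU so the GPU (if any) is free for the model.
with tf.device('/cpu:0'):
    dataset = tf.data.Dataset.from_tensor_slices(tf.random_uniform([1000, 28, 28]))
    dataset = dataset.map(lambda x: x / 255.0).batch(32).prefetch(1)
    images = dataset.make_one_shot_iterator().get_next()

# Model ops defined outside the cpu:0 scope can be placed on the GPU.
logits = tf.layers.dense(tf.reshape(images, [-1, 28 * 28]), 10)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(logits).shape)   # (32, 10)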


#5

For Windows (thanks to the owner fo40225), go here: https://github.com/fo40225/tensorflow-windows-wheel to fetch the URL for your environment based on the combination of "tf + python + cpu_instruction_extension". Then use this command to install:

pip install --ignore-installed --upgrade "URL"

If you encounter the "File is not a zip file" error, download the .whl to your local computer, and use this command to install:

pip install --ignore-installed --upgrade /path/target.whl

#6

For Windows, you can check the official Intel MKL optimization for TensorFlow wheels, which are compiled with AVX2. This solution sped up my inference by about 3x.

conda install tensorflow-mkl