
How to trick a neural network into thinking a panda is a vulture

Published 2015-12-31 14:41 | Source: codewords.recurse.com | Author: Julia Evans

Abstract: Based on her reading and experiments, the author shows how to fool a neural network, going from tool installation and model setup to a step-by-step look at the network and the math behind it. The article also provides demo code for download.

The magic of neural networks

When I opened Google Photos and searched my photos for "skyline", it found this picture of the New York skyline that I took in August, without my ever having tagged it.


When I search for "cathedral", Google's neural networks find the cathedrals and churches I've seen. It seems magical.

Of course, neural networks aren't magical, not at all! Recently I read a paper, "Explaining and Harnessing Adversarial Examples", which further demystified them for me.

The paper describes how to fool a neural network into making startling mistakes, by exploiting the fact that the network is simpler (more linear!) than you might imagine. We'll approximate the network with a linear function!

The key is to understand that this doesn't explain all (or even most) of the kinds of mistakes neural networks make! There are many possible mistakes! But it does give us some insight into one specific kind of mistake, which is great.

Before reading this paper, I knew three things about neural networks:

  • They're very good at image classification (when I search for "baby", they find my friend's adorable baby photos).
  • Everyone talks about "deep" neural networks.
  • They're made up of multiple layers of simple functions (usually sigmoids), structured like the figure below:


Mistakes

The fourth (and last) thing I knew about neural networks is that they sometimes make ridiculous mistakes. Spoiler for the results: here are two pictures, and the article will show how the neural network classifies them. We can make it believe the black image below is a paper towel, and that the panda is a vulture!


Now, this result doesn't surprise me, because machine learning is my job, and I know that machine learning habitually produces weird results. But to fix this super-weird kind of mistake, we need to understand the principles behind it! We're going to learn a bit about neural networks, and then I'll teach you how to make a neural network think a panda is a vulture.

Making our first prediction

We'll first load a neural network and make some predictions, and then we'll break those predictions. That sounds great. But first I need to get a neural network running on my computer.

I installed Caffe on my computer: neural network software developed by contributors from the Berkeley Vision and Learning Center (BVLC) community. I chose it because it was the first software I could find with a downloadable pre-trained network. You can also try Theano or TensorFlow. Caffe has very clear installation instructions, which means it took me only 6 hours of getting acquainted before I could actually use it.

If you want to install Caffe, you can use the setup I wrote, which will save you time. Just go to the neural-networks-are-weird repo and follow the instructions to run it. Warning: it downloads about 1.5 GB of data and needs to compile a lot of things. Here are the commands to build it (just a few lines!), which you can also find in the repository's README.

git clone https://github.com/jvns/neural-nets-are-weird
cd neural-nets-are-weird
docker build -t neural-nets-fun:caffe .
docker run -i -p 9990:8888 -v $PWD:/neural-nets neural-nets-fun:caffe /bin/bash -c 'export PYTHONPATH=/opt/caffe/python && cd /neural-nets && ipython notebook --no-browser --ip 0.0.0.0'

This starts the IPython Notebook server on your computer, and you can then make neural network predictions from Python. It runs locally on port 9990. If you don't want to follow along, that's completely fine; I've also included the experiment pictures in this article.

Once we have the IPython notebook up and running, we can start running code and making predictions! Here I'll post some nice pictures and small code snippets, but the full code and details can be viewed here.

We'll use a neural network called GoogLeNet, which won several categories of the ILSVRC 2014 competition; the correct classification is among its top 5 guesses 94% of the time. It's the network from the paper I read. (If you want a good read, check out the article on how humans can't do better than GoogLeNet. Neural networks really are magical.)
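The snippets below use net and transformer objects that the notebook sets up beforehand. Here's a minimal sketch of that setup; the file paths are assumptions following Caffe's standard GoogLeNet example, so adjust them to wherever your model files live:

import caffe

# Load the pre-trained GoogLeNet (paths are assumptions)
net = caffe.Net('deploy.prototxt',            # network architecture
                'bvlc_googlenet.caffemodel',  # pre-trained weights
                caffe.TEST)                   # test mode (no dropout)

# The transformer converts loaded images into the blob layout Caffe expects
transformer = caffe.io.Transformer({'data': net.blobs['data'].data.shape})
transformer.set_transpose('data', (2, 0, 1))     # HxWxC -> CxHxW
transformer.set_channel_swap('data', (2, 1, 0))  # RGB -> BGR
transformer.set_raw_scale('data', 255.0)         # [0, 1] -> [0, 255]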

First, let's use the network to classify an adorable kitten:

Here's the code for classifying the kitten:

image = '/tmp/kitten.png'
# preprocess the kitten and resize it to 224x224 pixels
net.blobs['data'].data[...] = transformer.preprocess('data', caffe.io.load_image(image))
# make a prediction from the kitten pixels
out = net.forward()
# extract the most likely prediction
print("Predicted class is #{}.".format(out['prob'][0].argmax()))

That's it! Just 3 lines of code. In the same way, I can classify an adorable puppy!

It turns out this dog isn't a corgi, just very similar in color. This network really does know more about dogs than I do.

What a mistake looks like (the Queen, for example)

The most delightful thing about doing this work was finding out what the neural network thinks the Queen of England is wearing on her head.


So, now we've seen the network do something right, and we've also seen it accidentally make an adorable mistake (the Queen is wearing a shower cap). Now... let's make it deliberately make mistakes, and get into its core.

Making mistakes on purpose

Before we can really understand how this works, we need to do a little math. First, let's look at what it says about a black screen.


It thinks this pure black image is velvet with probability 27%, and a paper towel with probability 4%. There are other categories whose probabilities aren't listed here; the probabilities sum to 100%.
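If you want to print those per-class numbers yourself, here's a minimal sketch; it assumes the net from above, plus a labels list of the 1000 class names (my addition, e.g. loaded from Caffe's synset_words.txt):

# Show the network's top 5 guesses and their probabilities
probs = net.blobs['prob'].data[0]
for i in probs.argsort()[::-1][:5]:
    print(labels[i], probs[i])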

I wanted to figure out how to make the neural network more confident that this is a paper towel.

To do this, we need to compute the neural network's gradient, that is, its derivative. You can think of this as a direction in which the image looks more like a paper towel.

To compute the gradient, we first need to pick an intended outcome to move toward, and set the output probability list to 0 everywhere and to 1 for "paper towel". Backpropagation is the algorithm that computes the gradient. I used to think it was mysterious, but it's actually just an algorithm implementing the chain rule. If you want to know more, this article has a wonderful explanation.

Here's the code I wrote. It's actually very simple: backpropagation is one of the most basic neural network operations, so it's readily available in the library.

def compute_gradient(image, intended_outcome):
    # Put the image into the net and make a prediction
    predict(image)
    # Get an empty set of probabilities
    probs = np.zeros_like(net.blobs['prob'].data)
    # Set the probability for our intended outcome to 1
    probs[0][intended_outcome] = 1
    # Do backpropagation to calculate the gradient for that
    # outcome and the image we put in
    gradient = net.backward(prob=probs)
    return gradient['data'].copy()

This basically tells us what the neural network is looking for at this point. Since everything we're working with can be represented as an image, here's the output of compute_gradient(black, paper_towel_label), scaled so it's visible.


Now, we can add or subtract a very dim version of this from our black screen, to make the neural network think our image is more or less like a paper towel. Since the image we're adding is so dim (pixel values less than 1/256), the difference is completely invisible. Here's the result:


Now the neural network is 16% sure our black screen is a paper towel, rather than 4%! Pretty clever. But we can do better. Instead of taking one big step in the paper-towel direction, we can take ten small steps, getting a bit more paper-towel-like at every step. You can see the probability change over time below. You'll notice the probability values are different from before because our step size is different (0.1 instead of 0.9).
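Here's a minimal sketch of that loop, reusing the compute_gradient() function above; the helper name fool() and the exact update rule are my assumptions about what the notebook does, not a verbatim copy:

import numpy as np

def fool(image, intended_outcome, n_steps=10, step_size=0.1):
    # Take several small steps in the direction that makes the
    # network more confident in intended_outcome
    for _ in range(n_steps):
        grad = compute_gradient(image, intended_outcome)
        image = image + step_size * np.sign(grad)
    return image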


Final result:


Here are the image's pixel values! They all started at 0, and you can see we've transformed them enough to make it think the image is a paper towel.


We can also multiply the image by 50 to get a better sense of what it looks like.


To me this doesn't look like a paper towel, but maybe it does to you. I'm guessing all the swirls in the image are what tease the neural network into thinking it's a paper towel. That covers the basic proof of concept and some of the math. We'll get to more of the math in a moment, but first, let's have some fun.

Playing with the neural network

Once I understood this, it became really fun. We can change a cat into a towel:


A trash can can be turned into a kettle / cocktail shaker:


And a panda can become a vulture.


This chart shows how the probabilities change over the 100 steps of turning the panda into a vulture; you can see they change very quickly.


You can view the code and run all of this in the IPython notebook. It's really fun.

Now, it's time for a little more math.

How it works: logistic regression

First, let's discuss one of the simplest methods of image classification: logistic regression. What is logistic regression? I'll try to explain.

Suppose you have a linear function for classifying an image. How do we use a linear function? Say your image has only 5 pixels (x1, x2, x3, x4, x5), with values between 0 and 255. Our linear function has one weight per pixel, say (23, -3, 9, 2, -5); to classify the image, we take the inner product of the pixels and the weights:

result = 23x1 - 3x2 + 9x3 + 2x4 - 5x5

Suppose the result is 794. Does 794 mean it's a raccoon or not? Is 794 a probability? Of course not: a probability is a number between 0 and 1, while our result can range from negative to positive infinity. The usual way to turn a value in (-∞, +∞) into a probability is the logistic function: s(t) = 1 / (1 + e^(-t))

The graph of this function is shown below:


s(794) is essentially 1, so if our raccoon weights give us 794, we're 100% sure it's a raccoon. In this model, we first transform the data with a linear function, then apply the logistic function to get a probability. This is logistic regression, and it's a very simple, popular machine learning technique.
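To make that concrete, here's a tiny sketch using the weights from above; the pixel values are made-up example numbers:

import math

weights = [23, -3, 9, 2, -5]       # the raccoon weights from above
pixels = [150, 130, 200, 100, 90]  # a made-up 5-pixel image

# Step 1: linear function (inner product of pixels and weights)
result = sum(w * x for w, x in zip(weights, pixels))

# Step 2: logistic function squashes (-inf, +inf) into (0, 1)
probability = 1 / (1 + math.exp(-result))

print(result, probability)  # a big positive result gives a probability near 1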

The "learning" in machine learning is mostly about determining the right weights (like (23, -3, 9, 2, -5)) from a given training set, so that the probabilities we get are as good as possible. Usually, the bigger the training set, the better.

Now that we understand what logistic regression is, let's talk about how to break it!

Breaking logistic regression

Andrej Karpathy published a gorgeous blog post, Breaking Linear Classifiers on ImageNet, explaining how to break a simple linear model (not logistic regression, but a linear model). Later we'll use the same principles to break neural networks.

Here's an example (from Karpathy's article) of linear classifiers for a few different foods, flowers, and animals, visualized below (click to enlarge).


You can see that the "Granny Smith" classifier is basically asking "is it green?" (not the worst way to find out!), and the "menu" classifier has discovered that menus are usually white. Karpathy explains it very clearly:

For example, apples are green, so the linear classifier has positive weights on the green channel and negative weights on the blue and red channels, across all spatial positions. It is hence effectively counting the amount of green stuff in the middle.

So, if I wanted to make the Granny Smith classifier think I'm an apple, here's what I'd need to do (sketched in code after the list):

  • Find out which pixels of the image it cares most about being green
  • Color the pixels it cares about green
  • Profit!
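Here's a sketch of those three steps; the weights array is a hypothetical linear classifier, not Karpathy's actual model:

import numpy as np

# Hypothetical linear classifier: one weight per pixel per RGB channel
weights = np.random.randn(224, 224, 3)
image = np.zeros((224, 224, 3))

# Step 1: find the pixels whose green-channel weight is most positive
green = weights[:, :, 1]
cares_most = green > np.percentile(green, 99)

# Step 2: color exactly those pixels green
image[cares_most, 1] = 255

# Step 3: profit; the classifier's score (inner product) is now much higher
print((image * weights).sum())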

So now we know how to fool a linear classifier. But neural networks aren't linear; they're highly nonlinear! Why is this relevant?

How it works: neural networks

Here I must be honest: I'm not a neural network expert, and my explanation won't be great. Michael Nielsen wrote a wonderful book called "Neural Networks and Deep Learning". Christopher Olah's blog is also very good.

What I do know about neural networks is that they're functions. You put in an image, and you get out a list of probabilities, one for each class. Those are the numbers you've been seeing in this article. Is it a dog? No. A shower cap? No again. A solar cell? YES!

So, a neural network is like 1000 functions (one for each output probability). But 1000 functions are very hard to reason about, so neural network people combine the 1000 probabilities into a single "score", called the loss function.

The loss function of each image depends on the image's correct output. Suppose I have a picture of an ostrich, and the neural network outputs probabilities p_j for j = 1...1000, while for each j the desired probability is y_j. Then the loss function is L = -Σ_j y_j log(p_j).


Assuming the label corresponding to "ostrich" is 700, then y_700 = 1, the other y_j are 0, and L = -log(p_700).
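As a quick numeric sketch of that formula (the 90% ostrich probability here is a made-up example):

import numpy as np

def loss(probs, true_label):
    # y is 0 everywhere except 1 at the true label, so the sum
    # collapses to -log(probs[true_label])
    y = np.zeros_like(probs)
    y[true_label] = 1
    return -np.sum(y * np.log(probs))

probs = np.full(1000, 0.1 / 999)  # spread 10% over the other 999 classes
probs[700] = 0.9                  # 90% sure it's an ostrich (label 700)
print(loss(probs, 700))           # -log(0.9), about 0.105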

The point to understand here is that the neural network gives you a function: you put in an image (the panda), and you get out the final value of the loss function (a number, like 2). Because it's a single-valued function, taking its derivative (or gradient) gives you another image. You can then use this image to fool the neural network, with the method we discussed earlier in this article!

Breaking neural networks

Here's how we apply the linear-function / logistic-regression breaking to neural networks! This is what you've been waiting for! Think of our image (the cute panda) as x. If we move it by a small delta, the loss function looks approximately like: L(x + delta) ≈ L(x) + grad · delta


where grad = ∇L(x) (this is calculus). To change the loss function as much as possible, we want to maximize the dot product between our move delta and the gradient grad. Let's compute the gradient with the compute_gradient() function and draw it as a picture:


Intuition says that what we need to do is create a delta that emphasizes the pixels of the image the neural network thinks are important. Now, suppose grad is (-0.01, -0.01, 0.01, 0.02, 0.03).

We can take delta = (-1, -1, 1, 1, 1); then grad · delta is 0.08. Let's try it! In code, that's delta = np.sign(grad). When we move by that amount, sure enough: now the panda turns into a weasel.
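You can check that tiny example in a couple of lines; sign(grad) is exactly the delta that maximizes the dot product when each entry of delta is capped at ±1:

import numpy as np

grad = np.array([-0.01, -0.01, 0.01, 0.02, 0.03])
delta = np.sign(grad)       # array([-1., -1., 1., 1., 1.])
print(np.dot(grad, delta))  # 0.08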


But why? Let's think about the loss function. We started out seeing that the probability it's a panda was 99.57%. -log(0.9957) = 0.0018. Very small! So adding a multiple of delta should increase the loss function (making it less panda-like), while subtracting a multiple of delta should decrease the loss (making it more panda-like). But the opposite happens! I'm still confused about this.

You can't fool a dog

Now that we understand the math behind it, a brief aside: I also tried to fool the network about the adorable dog from earlier:


But for the dog, the network strongly resists classifying it as anything other than a dog! I spent some time trying to make it believe the dog was a tennis ball, and it remained a dog. A different kind of dog, but still a dog.

I met Jeff Dean at a conference (he works on neural networks at Google) and asked him about this. He told me that this network's training data included a bunch of dogs, more than pandas, so he hypothesized that it's better trained at recognizing dogs. Seems plausible!

I think this is pretty cool, and it makes me more hopeful about training networks more accurately.

One other interesting thing on this topic: while I was trying to make the network think the panda was a vulture, it spent a little time in the middle wondering whether it was an ostrich. When I asked Jeff Dean about the panda and the dogs, he offhandedly mentioned "panda-ostrich space", even though I hadn't mentioned that the network considered an ostrich on its way to thinking the panda was a vulture. It was really cool that he'd spent enough time with the data and the network to know that ostriches and pandas are somehow close together in it.

Less mystery

When I started doing this, I had almost no idea what a neural network was. Now that I can make it think a panda is a vulture, and I've seen how clever it is about dogs, I understand them a little better. I no longer think what Google is doing is pure magic, but neural networks still puzzle me. There's a lot to learn! Fooling them this way removes some of the mystery, and now I know a bit more about them.

I believe you can too! All the code for this is in the neural-networks-are-weird repository. It uses Docker, so it's easy to install, and you don't need a GPU or a new computer. The code all ran on my 3-year-old GPU-less laptop.

If you want to learn more, read the original paper: Explaining and Harnessing Adversarial Examples. It's short, well written, and will tell you more that this article left out, including how to use this technique to build better neural networks!

Finally, thanks to Mathieu Guay-Paquet, Kamal Marhubi, and everyone else who helped me with this article!

Original article: How to trick a neural network into thinking a panda is a vulture (Translator: Liu Diwei / Reviewer: Liu Xiangyu / Editor: Zhong Hao)

About the translator: Liu Diwei, a graduate student at the School of Software, Central South University, is interested in machine learning, data mining, and bioinformatics.
