CSDN blog recommended articles — CSDN blog content aggregation service, http://prog3.com/sbdm/blog, (C) 2011 PROG3.COM, generated 2016/1/16 17:57:18

Talking about NDK development, from an old coder (by a345017062, 2016/1/16 17:48:56)
http://prog3.com/sbdm/blog/a345017062/article/details/50529031

About NDK, I too was once naive.

Five or six years ago, I had just gotten an HTC G1 for Android development and was thrilled that Java could be mixed with C; with the keys in hand I went looking for locks, always itching to do something with the NDK. Then the company made a business decision with a strong technical flavor, and I was pushed into wrestling with the NDK for a long time without much result, while the business behind the decision fizzled out. My enthusiasm for the NDK gradually cooled, and I truly came to appreciate that doing business and serving users is what matters: technology is king only when it is put to good use.

Today I received an e-mail (from grp0916@qq.com) asking me to talk about the NDK. It really scratched an itch: I have long had some words for people who want to get into the NDK, hence this article.

NDK is just a supporting role

As I said, doing business and serving users is the ultimate embodiment of technology's value; technologies compete only on how efficiently they deliver that service, and language features, speed, or audience size are not the real deciding factors. Java went from nondescript in its early years to unrivaled today on the strength of its mature ecosystem, which raised business-development efficiency to a new level. On Android, Java was chosen precisely to borrow the Java ecosystem and form Android's own, and the NDK that was brought out alongside it is only a supplement to an ecosystem that had already formed. The nature of human organizations dictates that a system this mature, facing no danger of destruction, will see no fundamental change; the NDK is more of a patch for what would otherwise go to waste.

The few sentences above give the NDK its positioning: a supporting role in the Android ecosystem, laying bricks for Android together with Java. Next, let's look at which bricks it adds.

NDK usage scenarios

1. Scenarios requiring high-performance computing. When no complex collaboration between project roles is involved and the code is used purely for computation, C's advantage is unmatched. So if you have to implement some complex algorithm, or you are in a scenario where data processing far outweighs data exchange, go ahead and use it.
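To make that first scenario concrete, here is a sketch of the kind of pure-compute kernel worth pushing down to C via the NDK: heavy arithmetic over raw buffers with no object traffic. The function name and data are made up for illustration.

```c
#include <stddef.h>
#include <stdint.h>

/* Illustrative hot loop: sum of squared differences between two byte
 * buffers, e.g. a building block for image diffing or signal matching.
 * Pure arithmetic over raw memory: the case where native code shines. */
uint64_t sum_sq_diff(const uint8_t *a, const uint8_t *b, size_t n)
{
    uint64_t acc = 0;
    for (size_t i = 0; i < n; i++) {
        int d = (int)a[i] - (int)b[i];   /* per-element difference */
        acc += (uint64_t)(d * d);        /* accumulate its square  */
    }
    return acc;
}
```

In a real app this function would sit behind a JNI entry point and be called with pixel or sample buffers passed down from Java.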

2. Reusing existing frameworks. Ecosystem infrastructure in fields such as multimedia, networking, and graphics, as well as many outstanding Linux projects, is often built on C, with a long and winding history, complex engineering, and a large amount of computation. To move it over directly, you must go through the NDK.

3. Security. Java's advantage lies in its ecosystem; before Android appeared, Java code ran mainly on servers, where security problems were far fewer. Now that apps are released to end users, many weaknesses are exposed. Logic you don't want people to see, data that needs encrypted storage, packing a shell around the app, runtime encryption and decryption, and so on are all the NDK's strengths.

4. Scripting. For some super apps, business complexity is beyond imagination, and sometimes scripting support is required; a scripting engine can only be brought in through the NDK. This is really a special case of point 2.

5. Cross-platform code. Products on different platforms for the same business inevitably share common logic. When I worked on lottery apps, for example, there was logic such as prize calculation. Writing the same thing once in Java and again in Objective-C wastes time and doubles the testing; it is a real hassle.

6. Optimizing for a hardware platform. If a phone uses a particular GPU and you want to push that GPU's features to the extreme, you can only optimize the code against the vendor's NDK-level framework.

7. NDK-exclusive APIs. These are mostly standard Linux APIs; Android is, as a reminder, Linux wearing a vest. Some of them cannot be called from the Java side. The chance you will need them is very small, but you cannot rule out the day one becomes useful, and there is no harm in knowing them.

8. Hooking. The classic here is Xposed. The trimmed-down fork of Xposed improved by the mobile Taobao architecture group has been quite influential; it solved a real need by restricting itself to hooking any API within the process itself, and on top of that cut-down Xposed they built a complete client monitoring system. Many other big companies in China, Meituan among them, are exploring this as well. It becomes necessary once your user count reaches a certain level.

Places where it is easy to step into a pit

My understanding of the NDK development environment runs fairly deep, and I took a few detours along the way:

1. The environment for writing code: after trying several combinations, I found it works best to finish development under an IDE such as Xcode and then compile in the NDK environment.

2. The bridge between Java and C, namely JNI. Back in 2010 this old coder translated the official electronic version of Sun's JNI specification; with it, there is no problem you cannot solve. (http://prog3.com/sbdm/blog/a345017062/article/category/1256568)
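As a minimal illustration of that bridge, here is a JNI sketch. Everything in it, the package `com.example`, class `Demo`, method `hello`, and the returned string, is hypothetical, and compiling it requires the JNI headers shipped with a JDK or the NDK, so treat it as a shape to follow rather than a drop-in file:

```c
#include <jni.h>

/* Hypothetical native counterpart of:
 *   package com.example;
 *   class Demo { public static native String hello(); }
 * The exported symbol name encodes package, class, and method. */
JNIEXPORT jstring JNICALL
Java_com_example_Demo_hello(JNIEnv *env, jclass clazz)
{
    /* Build a Java string from a C string and hand it across the bridge. */
    return (*env)->NewStringUTF(env, "hello from C");
}
```

On the Java side the class declares the method `native` and loads the shared library with `System.loadLibrary` before calling it.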

3. The build system. NDK development is done with Android Makefiles, whose running logic is basically consistent with standard Linux Makefiles but differs in many details. With the few concise tutorials this old coder wrote back in 2011, you can get started quickly:
http://prog3.com/sbdm/blog/a345017062/article/details/6130264
http://prog3.com/sbdm/blog/a345017062/article/details/6096795
http://prog3.com/sbdm/blog/a345017062/article/details/6442325
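For reference, a minimal Android.mk of the kind those tutorials walk through might look like this (module and file names are made up for illustration):

```makefile
# Hypothetical minimal Android.mk building one shared library
LOCAL_PATH := $(call my-dir)

include $(CLEAR_VARS)
LOCAL_MODULE    := hello        # produces libhello.so
LOCAL_SRC_FILES := hello.c
include $(BUILD_SHARED_LIBRARY)
```

Placed in a project's `jni/` directory, `ndk-build` picks it up and emits the `.so` per target ABI.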

4. Collecting NDK crashes. For code confidentiality, release builds usually strip the debug information; so how do you collect and locate crash information from C-layer code? Just look at google-breakpad. Taobao has a crash-collection system built on it that is very convenient and practical; I do not know whether it has been open-sourced externally by now.

Author: a345017062, published 2016/1/16 17:48:56. Original link.
Installing Ubuntu in VMware, in detail (by u013142781, 2016/1/16 17:47:44)
http://prog3.com/sbdm/blog/u013142781/article/details/50529030

Not every programmer must play with Linux, but many servers nowadays run Linux; and for those of us who do front-end work while also being interested in building back-end frameworks, many production frameworks and tools are installed on servers, and many large companies require familiarity with developing under Linux. So both personally and professionally it is worth getting to know Linux better.

(A quick search online suggests that most servers currently run systems such as CentOS, Ubuntu Server, SUSE Linux Enterprise, Red Hat Linux, and so on.)

So today the blogger installed Ubuntu in a virtual machine and shares the entire installation process with everyone. The reason for not installing a dual-boot system is that it is inconvenient in many ways: for example, you develop an example under Linux but want to write a blog post about it, and blogging is certainly more comfortable under Windows, so a lot of code would have to be ferried from one system to the other. Another advantage of installing Linux in a virtual machine is that you can download a tool's installation package under Windows and then install it on Linux.

Well, the following is the complete installation process.

First, download the Ubuntu image file

Download address: http://www.ubuntu.com

Open the link above and you will come to the following page; click Download:

(screenshot)

Next, on the following page, click Ubuntu Desktop:

(screenshot)

Then select the Ubuntu 14.04.3 LTS release:

(screenshot)

Then select Ubuntu Desktop and Server:

(screenshot)

Next we choose the 64-bit desktop version, Desktop (AMD64), since the blogger's computer is 64-bit. As an aside, the difference between the desktop and server editions is this: the desktop edition is for personal computer users and can handle word processing, web browsing, multimedia playback, and games; in essence, a general-purpose operating system for ordinary users. The server edition, on the other hand, is intended to act as a web server and can be used for hosting files, web pages, and similar content.

(screenshot)

Clicking the link above starts the download; the image is about 1 GB, which takes a while. Meanwhile, let's look at downloading and installing VMware, and then installing Ubuntu in it.

Two, VMware download and install

VMware can be downloaded directly via a Baidu search, haha:

(screenshot)

Once downloaded, run the installation; the defaults are fine, depending on personal preference, and mainly the installation directory is up to you.

When VMware starts after installation, you need to enter a product key (for VMware Workstation 12). The blogger found one casually on Baidu; ape friends can see whether it works for you:

5A02H-AU243-TZJ49-GTC7K-3C61N

With VMware installed, the following describes installing Ubuntu inside VMware.

Three, installing Ubuntu in VMware

1. Create a new virtual machine:

(screenshot)

2. In the wizard, choose Custom:

(screenshot)

3. Keep clicking Next until you reach this screen, and continue toward installing the system:

(screenshot)

4. Choose Linux, and note that in the drop-down below you should select Ubuntu 64-bit, because the image we downloaded is 64-bit. If your computer is 32-bit, choose Ubuntu instead. The blogger chose Ubuntu here, which caused an installation error later on, but it can also be fixed afterwards:

(screenshot)

5. Select the installation location. You must enter a directory that already exists, or an error will be reported later:

(screenshot)

6. Next come the processor and memory settings. If your computer's configuration allows, adjust them; otherwise take the defaults, as the blogger did here. Then keep clicking Next until this screen, and choose to store the virtual disk as a single file:

(screenshot)

7. Click Next again until the following page, then click Customize Hardware:

(screenshot)

8. As in the figure below, select the Ubuntu image we downloaded in step one:

(screenshot)

9. Click Finish; the wizard setup is complete:

(screenshot)

10. The virtual machine is now configured, so let's power it on:

(screenshot)

11. You will then reach the following interface (if an error appears instead, see "Four, possible errors"). We choose 中文(简体) and click Install Ubuntu:

(screenshot)

12. At the next screen, click Continue:

(screenshot)

13. Then click Install Now:

(screenshot)

14. At this screen, click Continue:

(screenshot)

15. As follows, enter your location; anything reasonable will do:

(screenshot)

16. Then choose Chinese and click Continue:

(screenshot)

17. Set the user name and password; the blogger selected automatic login:

(screenshot)

18. The installation proper now begins:

(screenshot)

19. When the installation completes, you will be prompted to restart; click Restart Now:

(screenshot)

20. After a successful restart, you arrive at the desktop. If the automatic restart runs into problems, restart manually; it has no real impact:

(screenshot)

21. Open the browser and enter Baidu's address to confirm that the network is reachable:

(screenshot)

Installation complete!

Four, possible errors

During installation the blogger ran into the following errors:

4.1 "This kernel requires an x86-64 CPU, but only detected an i686 CPU." As follows:

(screenshot)

Possible cause: when creating the virtual machine we chose Ubuntu on the wizard page below rather than Ubuntu 64-bit, while the downloaded image is 64-bit, as shown here:

(screenshot)

Solution: change the setting back to Ubuntu 64-bit as follows, then continue with the steps you had not completed:

(screenshot)

4.2 The following error is reported:

(screenshot)

Possible cause: virtualization is not enabled on your computer.

Solution: restart the computer, enter the BIOS (on the blogger's machine the key is F10), and enable virtualization.

After entering the BIOS, select the Security option:

(screenshot)

Select Virtualization, press Enter, and press the + key to change both options to Enabled:

(screenshot)

Then press F10 and enter y to save and exit.

With virtualization enabled, continue with the installation steps you had not completed.

4.3 If the virtual system fails to restart after installation (for example, it stays on one page for a long time), restart Ubuntu manually.

Author: u013142781, published 2016/1/16 17:47:44. Original link.
Android intermediate-advanced tutorial 1.1: using Git locally (by zpj779878443, 2016/1/16 17:42:06)
http://prog3.com/sbdm/blog/coder_pig/article/details/50529019

Tags: Android advanced


1 Introduction

In the Android fundamentals series we already covered simple uses of Git; in this advanced series we
explain Git systematically: the basic commands; the concepts of the working directory, staging area, local repository (history), and remote repository;
branch management for team collaboration; using Git from Android Studio; and so on. Git is a fast, distributed version
control system. What distinguishes it from other version control systems is that Git records snapshots directly rather than comparing differences!
Those other systems care only about the differences in file contents, recording with each update
only the changed lines; when you want to switch to a previous version, you have to merge your way back. Git, on
every commit, stores all modified files of the current version in full rather than just a diff,
so switching to a historical version takes nothing more than a simple reset!


2. The several parts of Git

Before we start using Git, we need to know its four parts:

Here the four parts are introduced in turn:

The working directory needs no explanation: it is our current workspace. For the other three parts, an example will help build a mental picture.
Everyone has shopped online, say at a Tmall supermarket. When we see goods we like, we add them
to the shopping cart (the staging area), and we may frequently add (add) or remove (checkout) goods.
During this stage we can do as we please, since no money changes hands. When we have picked out everything, we submit our order:
clicking Submit Order (commit) generates an order list of the goods (a snapshot), and at submission time we can attach a short
note, such as the color we want (commit -m "color"). At this point the order has not been paid (pushed); we
can find our order (and its snapshot) in the unpaid-order list of our own account (the local repository), and we can also see our
previous order records there. Then we select this unpaid order and make the payment (push); once payment
completes, the merchant (the remote repository) receives the order and ships the goods...

Believe that with this figure and the online-shopping analogy, you should be able to understand the four parts of Git. Most Git operations
are performed locally. Next comes the download and installation of Git.


3. Download and install Git

After installing Git, we can open the Git command line:

Windows: right-click anywhere and click Git Bash to open the Git command line.
Ubuntu: just open a Terminal (shortcut: Ctrl + Alt + T).

Here is why you want to use the command line:
partly because being fluent with the commands lets you show off mercilessly; but really, more because it reduces the cost of using Git across platforms.
For example, you are familiar with some GUI Git tool under Windows, but suppose one day you migrate to Linux, where everything
is commands: how do you play then? Or you switch to another new GUI tool and must spend time learning how to use it.
See for yourself whether you need the command line; you can also use graphical tools, and later on we will use Git from Android Studio!
PS: I originally intended to do the demos from the Ubuntu command line, but found the screenshots too much hassle there, since
Ubuntu can only capture the whole screen or a selected region; so the demos are done on Windows instead ~


4 Set up your identity information

The first thing to do after installing Git is to configure your identity information: in team development, when a problem is traced back, who takes the blame? It must be on record!
Type the following commands:

git config --global user.name "Coder-pig"
git config --global user.email "779878443@qq.com"

After configuring, leave out the quoted part and type the commands above once more; you can then see whether the configuration succeeded.

You can also type the following command to view all Git settings:

git config --list


4 Get help

Like any other command-line tool, Git has a help command; when you run into a command you have never seen or have forgotten, type:

git help init

Change init to the command you want to look up, e.g. git help add.

On Windows this opens a page of the Git manual, where you can read how the command is used,
while on Ubuntu the help is printed directly in the terminal.

You can also look up the corresponding command in the official Git manual!


5 Create a local code repository

You can directly type the following command to create a new project with a Git repository:

git init GitForTest

Change GitForTest to the project you want to create! Then go into the directory of the newly created project; after
setting hidden files to visible, you can see the .git folder, which holds the contents of our Git repository. Remember
not to change or delete anything in it! (You can also type ls -ah to view the hidden files.)

Of course, if you want to add a Git repository to an existing project, change into the project's folder from the command line or Git Bash and type the following command to add a local Git repository to it:

git init

6 Put files into the staging area

New or modified files can be added to the staging area with the git add command.
You can add files one by one with:

git add README.md

If there are many files and adding them one at a time is too much trouble, you can add several at once.
First, two terms: tracked means a file has already joined the Git repository; untracked means it has not yet joined!

1) Stage the modification or deletion information of all tracked files, without touching untracked files:

git add -u

2) Stage the modification or deletion information of all tracked files, and also add untracked files to the staging area:

git add -A

3) Add everything under the current working directory to the staging area:

git add .

Besides the above, Git also provides an interactive mode; type:

git add -i

The process goes like this:

1. I create two files in the GitForTest folder.
2. Type git add -i; once inside, type 4 to choose adding untracked files.
3. Git lists the untracked files, and we add files by their serial numbers.
4. Pressing Enter shows the relevant prompt; press Enter again directly to end the selection!
5. Type git add -i again and enter 4; you can see there are no untracked files left!

There are several other subcommands as well; space is limited, so study them yourself if you are interested!


7 Commit the staged content to the local repository (history)

We can commit the staged content to the repository with the git commit -m "xxx" command:

git commit -m "modified xxx"

After -m comes the commit note, with "xxx" as its content; don't skip it to save effort. If you leave out
-m "xxx", Git will drop you into Vim to write the message, so it is recommended to write the commit note right here!

In addition, our project may contain files that never change or are generated automatically, such as the lib, gen, and bin folders; we do not need
to commit these every time. We can create a file named .gitignore at the same level as the .git directory, write into it
the files that need not be committed, and commit will then automatically ignore those files:
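A .gitignore along those lines might look like this (a sketch; the exact entries depend on your project):

```gitignore
# build outputs
bin/
gen/
# bundled libraries
lib/
# local, machine-specific files
local.properties
*.iml
```

Each line is a pattern; trailing slashes match directories, and `*` wildcards match file names.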


8 See the current state of the working directory and staging area

We can check the situation of the working directory and the staging area with the git status command: for example, which files in the working directory
have changed compared with the staging area or still need an add, or whether something has been added to the staging area but not yet committed. Type:

git status

For example, here I modify the README.md file.
Just changed, no add yet:

After adding the file:

After committing the staged content:

Very simple. In addition, you can use the following command to output the result in a brief form:

git status -s

9 View the difference between the working directory and the staging area

With git status we get only the state of the working directory and the staging area; just the state.
If we need to see what actually changed, we need to type:

git diff

This shows a comparison of the current working directory against the staging area and what modifications were made!
PS: above, we added one line to the README.md file and then entered git diff!


10 View the commit history

Remember the online-shopping example: we can find our order records under My Orders.
Similarly, in Git we can view all commit records! Type the following command:

git log

Of course, you can also call the following to get more compact output:

git log --oneline

If the above does not satisfy you, refer to: Viewing the Commit History
to customize the log, for example:

11 Delete a file + restore a file (not yet added to the staging area)

We can delete a file directly with a right-click, or from the command line type rm xxx.xxx. But this only deletes
the file in the current working directory; it still exists in the staging area, so if you type git status at this point, you will find:

Git tells you the working-directory file was deleted, and you then have two choices:
1) Delete the staged file as well; then type:

git rm "xxx.xxx"
git commit -m "xxx"

2) You deleted it by mistake; to restore the staged file to the working directory, just type:

git checkout -- xxx.xxx

Duang! The deleted file is back again.
Of course, checkout applies to more than accidentally deleted files: when you have changed a file beyond recognition
and suddenly regret it, even though you have Ctrl+S'd the code many times, you can use this command to return
the file to its original appearance! (The premise is that you have not run add!)


12 Restore a file (added to the staging area, not committed)

If you have already put the file into the staging area with git add, then checkout alone will not work!
We need the git reset command to discard the staged modification record (a version rollback) and set the file's state back to the last commit!
Type:

git reset HEAD xxx.xxx

Then call:

git checkout -- xxx.xxx

and the file is restored to its original state!


13 Restore a file (committed): version rollback

Suppose the file's modification has already been committed, and for whatever reason you regret it and want to recover the file as of the previous commit,
or the one before that. You might start to panic, but Git provides a time machine (version rollback); we can
go back to a previous version through the following command:

git reset HEAD^

Now type git log and you can see the version has gone back to the previous one!
If you want the version before last, just add another ^; one ^ per version, and so on!
Besides the above, you can also roll back by version number; here I retreat to the first version:

git reset --hard 8c3f91f

Hey, no pressure. Then you suddenly regret it and want to return to the newest version, ah... all right, it is just the same,
except the version number becomes the latest commit's version number:

git reset --hard cf2d155

You may stammer: "Well, I've just closed the command line and cannot find the latest version number,
and git log cannot find it either; can't I go back to the future?" Fortunately, Git's time machine records
every command you enter; you only need to type:

git reflog

Get the version number, then git reset, and you're done ~


14 Automatic completion of Git commands

While entering a Git command, just press Tab twice!

15 Git command aliases

If you want to be lazy and type a few letters fewer, you can set aliases for commands and then invoke the corresponding command by its alias, for example setting status to st:

git config --global alias.st status

16 Summary

This section explained local Git usage in detail. Most of the time we operate Git locally, so being
familiar with these commands is important! Of course, it takes a bit of accumulation: the more you type, the more fluent you become! In the next section we
will learn about remote repositories, branch management, collaboration in team development, and so on ~ thank you!



Author: zpj779878443, published 2016/1/16 17:42:06. Original link.
Chapter 2, section 3: strings (by u013595419, 2016/1/16 17:41:34)
http://prog3.com/sbdm/blog/u013595419/article/details/50529017

The basic definition of a string was already introduced in "Chapter 2: stacks, queues, and strings". As with stacks and queues, strings have both a sequential storage structure (called the sequential string here) and a chained storage structure (called the chain string here).

One. Sequential string

1.1 Definition

A sequential implementation allocates a contiguous block of storage units for the string's elements, with a length field marking the length of the string.
The sequential structure of the string can be described as:

#define MaxSize 100   /* maximum string length; define to suit your needs */
typedef char ElemType;
typedef struct {
    ElemType data[MaxSize];
    int length;
} String;

1.2 Basic operations

1.2.1 Create a string

Read character elements from input, with '#' marking the end.

void StrCreat(String *S) {
    char x;
    S->length = 0;
    printf("Input String_S (end with '#'!):\n");
    scanf("%c", &x);
    while (x != '#') {
        S->data[S->length++] = x;
        scanf("%c", &x);
    }
}

1.2.2 String length

Because the string's definition contains a length variable, just return it.

int StrLength(String *S) {
    return S->length;
}

1.2.3 Copy a string

Copy the string S into the string T: visit S in order, assigning each element to T to complete the copy.

void StrCopy(String *S, String *T) {
    for (int i = 0; i < S->length; i++) {
        T->data[i] = S->data[i];
    }
    T->length = S->length;
}

1.2.4 Compare string sizes

Comparing two strings means comparing ASCII codes, character by character from left to right. If at some position one string's character has the larger ASCII value, return the result; if the two strings have unequal lengths but agree on all the leading characters, the longer string is the larger one.

int StrCompare(String *S, String *T) {
    int i = 0;
    while (i != S->length && i != T->length) {
        if (S->data[i] < T->data[i]) {
            return -1;
        } else if (S->data[i] > T->data[i]) {
            return 1;
        } else {
            i++;
        }
    }
    if (i < S->length) {
        return 1;
    } else if (i < T->length) {
        return -1;
    } else {
        return 0;
    }
}
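As a quick check of the comparison rules, here is a self-contained sketch: the type and function are restated so the snippet compiles on its own, `MaxSize` is chosen arbitrarily, and `StrAssign` is a helper invented for the example to fill a String from a C string.

```c
#include <string.h>

#define MaxSize 100
typedef char ElemType;
typedef struct {
    ElemType data[MaxSize];
    int length;
} String;

/* Same comparison logic as in the text: ASCII order first, length second. */
int StrCompare(String *S, String *T) {
    int i = 0;
    while (i != S->length && i != T->length) {
        if (S->data[i] < T->data[i]) return -1;
        else if (S->data[i] > T->data[i]) return 1;
        else i++;
    }
    if (i < S->length) return 1;        /* S is longer: S > T */
    else if (i < T->length) return -1;  /* T is longer: S < T */
    else return 0;
}

/* Helper invented for the example: fill a String from a C string. */
void StrAssign(String *S, const char *cs) {
    S->length = (int)strlen(cs);
    memcpy(S->data, cs, (size_t)S->length);
}
```

With it, "abc" vs "abd" compares by the differing third character, while "abc" vs "ab" falls through to the length rule.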

1.2.5 Concatenate strings

Append the string T to the string S; take care to update S's length variable afterwards.

void StrConcat(String *S, String *T) {
    int i;
    for (i = S->length; i < S->length + T->length; i++) {
        S->data[i] = T->data[i - S->length];
    }
    S->length = i;
}

1.2.6 Take the len characters of S starting at position pos as the string Sub

Because sequential storage is used, we can exploit random access: go directly to position pos, then assign len characters to the new string T.

String *SubString(String *S, int pos, int len) {
    String *T;
    T = (String *)malloc(sizeof(String));
    T->length = 0;
    if (pos > S->length || (pos + len) > S->length) {
        printf("Illegal position!\n");
        exit(0);
    }
    for (int i = pos; i < pos + len; i++) {
        T->data[T->length++] = S->data[i];
    }
    return T;
}
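The random-access behavior can be exercised like this (again a self-contained sketch: the type and function are restated so the snippet compiles alone, and `StrAssign` is a helper invented for the example):

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define MaxSize 100   /* capacity, chosen arbitrarily for this sketch */
typedef char ElemType;
typedef struct {
    ElemType data[MaxSize];
    int length;
} String;

/* Same logic as in the text: jump straight to pos, copy len characters. */
String *SubString(String *S, int pos, int len) {
    String *T = (String *)malloc(sizeof(String));
    T->length = 0;
    if (pos > S->length || (pos + len) > S->length) {
        printf("Illegal position!\n");
        exit(0);
    }
    for (int i = pos; i < pos + len; i++) {
        T->data[T->length++] = S->data[i];
    }
    return T;
}

/* Helper invented for the example: fill a String from a C string. */
void StrAssign(String *S, const char *cs) {
    S->length = (int)strlen(cs);
    memcpy(S->data, cs, (size_t)S->length);
}
```

Note that the caller owns the returned String and should free it when done.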

Two. Chain string

2.1 Definition

A chain string uses a linked list for its storage, generally implemented as a singly linked list without a head node.
The data structure of the chain string is described as follows:

typedef struct SNode {
    ElemType data;
    struct SNode *next;
} String;

Because the operations on the chain string are similar to those on the singly linked list of Chapter 1, they are not described in detail here.
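Since those chained operations mirror the linked-list chapter, here is just a sketch of one of them, computing the length by walking the nodes. The node type is restated, renamed `LinkString` here to avoid clashing with the sequential String above, and `LinkStrCreat` is a helper invented for the example:

```c
#include <stdlib.h>

typedef char ElemType;
typedef struct SNode {
    ElemType data;
    struct SNode *next;
} LinkString;   /* chain string: one character per node, no head node */

/* Length must be counted by traversal: O(n), unlike the sequential
 * string's stored length field. */
int LinkStrLength(LinkString *S) {
    int n = 0;
    for (LinkString *p = S; p != NULL; p = p->next) n++;
    return n;
}

/* Helper invented for the example: build a chain from a C string. */
LinkString *LinkStrCreat(const char *cs) {
    LinkString *head = NULL, **tail = &head;
    for (; *cs; cs++) {
        LinkString *node = (LinkString *)malloc(sizeof(LinkString));
        node->data = *cs;
        node->next = NULL;
        *tail = node;            /* append at the tail pointer */
        tail = &node->next;
    }
    return head;
}
```

The O(n) length query is the usual trade-off of the chained representation against the sequential one's O(1) length field.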

Author: u013595419, published 2016/1/16 17:41:34. Original link.
Deep learning and computer vision series (5): backpropagation and an intuitive understanding of it (posted by yaoqiang2011, 2016/1/16 17:17:19)
Authors: Han Xiaoyang & Long Xinchen
Time: December 2015.
Source: http://prog3.com/sbdm/blog/han_xiaoyang/article/details/50321873
Statement: All rights reserved; for reprints please contact the authors and indicate the source.

1 Introduction

To be honest, when it came to writing this part I at first refused, because I felt it would read like a calculus class; after all, the usual intuitive understanding of the backpropagation algorithm is just the chain rule of differentiation. But, for God's sake, understanding this part and its details really is useful for designing, tuning, and optimizing neural networks, so I grit my teeth and write it.

Problem statement and motivation:

  • As we all know, we are given an image as a pixel vector $x$ and a function $f(x)$ defined on it, and we want to compute the gradient of $f$ at $x$, namely $\nabla f(x)$.

  • We care about this problem because, in a neural network, $f$ corresponds to the loss function $L$, while the input $x$ corresponds to the training sample data and the weights of the network $W$. As a concrete case, the loss could be the SVM loss function, with inputs the training pairs $(x_i, y_i),\ i = 1 \dots N$ and weights and biases $W, b$. One thing to note: in our scenario we usually regard the training data as given and the weights as the variables we control. So to update the parameters and, say, minimize the loss function, we usually compute the gradient of $f$ with respect to the parameters $W, b$. That said, the gradient with respect to $x_i$ is sometimes useful too, for example when we want to visualize and understand what the neural network is doing.

2 Gradients via partial derivatives

OK, time to review some calculus, starting from the simplest example. If f(x, y) = xy, we can take the partial derivative of this function with respect to x and to y, as follows:

f(x, y) = xy  →  ∂f/∂x = y,  ∂f/∂y = x

2.1 interpretation

We know what a partial derivative means: the rate of change of a function along the dimension of a given variable, near the current point. That is:

df(x)/dx = lim_{h→0} (f(x + h) - f(x)) / h

In the formula above, d/dx acting on f means taking the partial derivative with respect to x: the rate of change in a small region around the current point, along the x dimension. For example, if x = 4, y = -3, then f(x, y) = -12 and the partial derivative with respect to x is ∂f/∂x = -3. This tells us that if x increases by a tiny amount, the whole expression decreases by three times that amount. Rearranging the formula above makes this visible: f(x + h) ≈ f(x) + h · df(x)/dx. In the same way, since ∂f/∂y = 4, increasing y by a tiny amount h changes the whole expression by 4h.

The partial derivative along each dimension/variable represents the "sensitivity" of the whole function expression to that value.
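This sensitivity interpretation is easy to check numerically. The following sketch is my addition, not from the original article; the values x = 4, y = -3 match the example above. It compares a finite-difference estimate of ∂f/∂x for f(x, y) = xy against the analytic answer y:

```python
def f(x, y):
    return x * y

x, y, h = 4.0, -3.0, 1e-6

# finite-difference estimate of the partial derivative with respect to x
dfdx_num = (f(x + h, y) - f(x, y)) / h

# analytic answer: df/dx = y = -3
print(abs(dfdx_num - y) < 1e-4)  # True
```

The same check with y + h recovers ∂f/∂y = x = 4.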

Oh, and by the way: what we call the gradient ∇f is really just the vector of partial derivatives; for example, here ∇f = [∂f/∂x, ∂f/∂y] = [y, x]. Even though, strictly speaking, the gradient is a vector, we will still usually say "the gradient on x" rather than "the partial derivative on x".

We all know the partial derivatives of the addition operation:

f(x, y) = x + y  →  ∂f/∂x = 1,  ∂f/∂y = 1

For some other operations, such as the max function, the partial derivatives look like this, where the parenthesized expression is an indicator that is 1 when the condition holds:

f(x, y) = max(x, y)  →  ∂f/∂x = 1(x >= y),  ∂f/∂y = 1(y >= x)
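The gating behavior of max can likewise be checked with finite differences; a small sketch (my illustration, with arbitrary values, not from the original text):

```python
def fmax(x, y):
    return max(x, y)

x, y, h = 4.0, 2.0, 1e-6

# x is the larger input, so it receives the full gradient...
dfdx_num = (fmax(x + h, y) - fmax(x, y)) / h  # approximately 1.0
# ...while a tiny change in y does not affect the output at all
dfdy_num = (fmax(x, y + h) - fmax(x, y)) / h  # 0.0
```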

3 The chain rule for partial derivatives of composite functions

Consider a slightly more complex function, say f(x, y, z) = (x + y)z. This expression is not actually complicated, and we could differentiate it directly, but let's take an indirect route, to help us understand backpropagation. Using substitution, split the function into two parts: q = x + y and f = qz. For these two parts we know how to take the partial derivatives with respect to their variables: ∂f/∂q = z, ∂f/∂z = q, and ∂q/∂x = 1, ∂q/∂y = 1. Of course, q is a variable we introduced ourselves; we are not interested in its gradient for its own sake.
The chain rule tells us how to "chain" these partial derivative formulas together to get the gradients we do care about: ∂f/∂x = (∂f/∂q) · (∂q/∂x)

See an example:

x = -2; y = 5; z = -4

# forward pass
q = x + y      # q becomes 3
f = q * z      # f becomes -12

# backward pass:
# first backprop through f = q * z
dfdz = q       # df/dz = q
dfdq = z       # df/dq = z
# then backprop through q = x + y
dfdx = 1.0 * dfdq   # dq/dx = 1; this is the chain rule
dfdy = 1.0 * dfdq   # dq/dy = 1

In the end, the chain rule leaves us with exactly the quantities we are interested in: [dfdx, dfdy, dfdz], that is, the partial derivatives of the original function with respect to x, y, z. This was a simple example; in later programs, for brevity, we will not write out the full name dfdq but use dq instead.

The following is a schematic diagram of this calculation:


Figure 1

4 An intuitive understanding of backpropagation

A one-sentence summary: backpropagation is an elegant local-to-global process. Take the circuit diagram above: each gate, upon receiving its inputs, can compute two things:

  • its output value
  • the local gradient of its output with respect to its inputs

Clearly, each gate does these computations completely independently, without needing to know anything about the structure of the rest of the circuit. Yet, during the backward pass, each gate ends up accumulating the gradient of the whole circuit's output with respect to its own output. The chain rule tells us that each gate takes the gradient it receives from above, multiplies it by the local gradient it computed for each of its inputs, and passes the results backward.

Let's walk through the figure above as an example. The add gate receives the inputs [-2, 5] and outputs the result 3; the partial derivative of addition with respect to either input is 1. The rest of the circuit computes the final result, -12. During the backward pass, the chain rule works like this: the multiply gate, whose other input is z = -4, hands the add gate's output a gradient of -4. If we anthropomorphize the network, we can say the network "wants" the add gate's result to be smaller, with a strength of 4x. The add gate takes this gradient of -4 and multiplies it by its local gradient for each input (1 for both addends): 1 * -4 = -4. If the input x were decreased, the add gate's output would decrease, and that, in turn, would increase the multiply gate's output.

Backpropagation can thus be seen as a "dialogue" between gates, in which each gate tells its inputs whether it "wants" their values to be larger or smaller (and by how strongly), so that the final output comes out larger.

5 Sigmoid example

The example above is actually rarely seen in practice; the networks and gate functions we usually encounter are more complex. But no matter what they are, backpropagation can be used; the only difference is that the decomposition of the network into a layout of gate functions may be more involved. Let's take the logistic regression we saw earlier as an example:

f(w, x) = 1 / (1 + e^-(w0·x0 + w1·x1 + w2))

This seemingly complex function can in fact be seen as a combination of a handful of basic functions, whose derivatives are as follows:
f(x) = 1/x  →  df/dx = -1/x²
f_c(x) = c + x  →  df/dx = 1
f(x) = e^x  →  df/dx = e^x
f_a(x) = ax  →  df/dx = a

Each of these basic functions can be regarded as a gate, so a combination of such simple elementary functions can implement the more complex mapping in the logistic regression function. Below we draw out the network and give concrete numerical values for the inputs and parameters:


Figure 2

In this figure, [x0, x1] are the inputs and [w0, w1, w2] are the adjustable parameters. What the network does is a linear computation on the input (the inner product of x and w), and at the same time it feeds the result into the sigmoid function, mapping it to a number in (0, 1).

In the example above, the inner product between w and x is decomposed into a long chain of small functions, which is then connected to the sigmoid function σ(x). Interestingly, although the sigmoid function looks complicated, there is a trick for differentiating it, as follows:

σ(x) = 1 / (1 + e^-x)  →  dσ(x)/dx = e^-x / (1 + e^-x)² = ((1 + e^-x - 1) / (1 + e^-x)) · (1 / (1 + e^-x)) = (1 - σ(x)) · σ(x)

You see: its derivative can be re-expressed very simply in terms of the function itself. That makes computing the derivative extremely convenient. For example, if the sigmoid receives the input 1.0, its output is 0.73, and its local derivative is simply (1 - 0.73) * 0.73 ≈ 0.2. Let's look at the backpropagation code for this sigmoid part:

w = [2, -3, -3]  # we are given a set of weights
x = [-1, -2]

# forward pass
dot = w[0]*x[0] + w[1]*x[1] + w[2]
f = 1.0 / (1 + math.exp(-dot))  # sigmoid function

# backward pass through the sigmoid neuron
ddot = (1 - f) * f               # local derivative of the sigmoid
dx = [w[0] * ddot, w[1] * ddot]  # backprop along the x path
dw = [x[0] * ddot, x[1] * ddot, 1.0 * ddot]  # backprop along the w path
# yes! It's done! Isn't that simple?

5.1 A small implementation tip

Looking back at the code above, you will notice a trick that makes backpropagation easy to implement in practice: decompose the forward pass into stages that are each easy to backpropagate through.

6 Backpropagation: a more complex function

We look at a slightly more complex function:

f(x, y) = (x + σ(y)) / (σ(x) + (x + y)²)

Er, a side note: this function has no practical significance whatsoever. We mention it only as an example of how to apply backpropagation to a complicated function. If you differentiate this function directly with respect to x or y, you will get a very messy expression; but if you use backpropagation to compute the concrete gradient values, there is no such trouble. We split the function into small parts, run the forward and backward passes over them, and get the result. The forward pass code is as follows:

x = 3  # example values
y = -4

# forward pass
sigy = 1.0 / (1 + math.exp(-y))  # sigmoid in the numerator
num = x + sigy
sigx = 1.0 / (1 + math.exp(-x))  # sigmoid in the denominator
xpy = x + y
xpysqr = xpy**2
den = sigx + xpysqr
invden = 1.0 / den
f = num * invden  # done!

Note that we did not push the forward pass through to the final result in one shot; we deliberately kept a number of intermediate variables, each of which is a simple expression whose local gradient we can write down directly. Computing the backward pass is therefore easy: we start from the final result and work backward, and every intermediate variable from the forward pass (sigy, num, sigx, xpy, xpysqr, den, invden) gets used again, with a corresponding gradient variable, to produce the partial derivatives we want. The backward pass code is as follows:

# local expression: f = num * invden
dnum = invden
dinvden = num
# local expression: invden = 1.0 / den
dden = (-1.0 / (den**2)) * dinvden
# local expression: den = sigx + xpysqr
dsigx = 1.0 * dden
dxpysqr = 1.0 * dden
# local expression: xpysqr = xpy**2
dxpy = (2 * xpy) * dxpysqr
# local expression: xpy = x + y
dx = 1.0 * dxpy
dy = 1.0 * dxpy
# local expression: sigx = 1.0 / (1 + math.exp(-x))
dx += ((1 - sigx) * sigx) * dsigx  # note the += here!!
# local expression: num = x + sigy
dx += 1.0 * dnum
dsigy = 1.0 * dnum
# local expression: sigy = 1.0 / (1 + math.exp(-y))
dy += ((1 - sigy) * sigy) * dsigy
# done!
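A staged backward pass like this is easy to get subtly wrong (forgetting a +=, dropping a branch), so it is worth verifying against numerical gradients. The sketch below is my addition: it wraps the text's forward and backward passes into one function and checks dx and dy by finite differences at the example point x = 3, y = -4:

```python
import math

def f_and_grads(x, y):
    # forward pass for f(x, y) = (x + sig(y)) / (sig(x) + (x + y)**2),
    # staged exactly as in the text
    sigy = 1.0 / (1 + math.exp(-y))
    num = x + sigy
    sigx = 1.0 / (1 + math.exp(-x))
    xpy = x + y
    xpysqr = xpy**2
    den = sigx + xpysqr
    invden = 1.0 / den
    f = num * invden

    # backward pass, mirroring the staged variables
    dnum = invden
    dinvden = num
    dden = (-1.0 / (den**2)) * dinvden
    dsigx = 1.0 * dden
    dxpysqr = 1.0 * dden
    dxpy = (2 * xpy) * dxpysqr
    dx = 1.0 * dxpy
    dy = 1.0 * dxpy
    dx += ((1 - sigx) * sigx) * dsigx
    dx += 1.0 * dnum
    dsigy = 1.0 * dnum
    dy += ((1 - sigy) * sigy) * dsigy
    return f, dx, dy

x, y, h = 3.0, -4.0, 1e-6
f, dx, dy = f_and_grads(x, y)
dx_num = (f_and_grads(x + h, y)[0] - f) / h
dy_num = (f_and_grads(x, y + h)[0] - f) / h
print(abs(dx - dx_num) < 1e-4, abs(dy - dy_num) < 1e-4)  # True True
```

If the two printed values are True, the analytic backward pass agrees with the numerical gradient.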

When actually programming this, pay attention to:

  • Cache the forward-pass intermediate variables: the backward pass reuses the results computed during the forward pass, so keeping them around greatly speeds up backpropagation.

6.1 Common patterns in backpropagation computations

Even though the structures of different neural networks vary, in most cases the gradient computations of the backward pass fall into a few common patterns. Take the three most common simple gates (add, multiply, max): each plays a very simple and direct role during backpropagation. Let's look at this small example network:


Figure 3

It contains the three kinds of gates mentioned above: max, add, and multiply.

  • Add gate: during the backward pass, no matter what the input values were, it takes the gradient arriving at its output and distributes it, unchanged, evenly to its two input paths, because the partial derivative of addition with respect to each input is +1.0.
  • Max gate: unlike the add gate, during the backward pass it routes the received gradient to only one of its input paths, because max(x, y) is sensitive only to the larger of x and y: the partial derivative with respect to that one is +1.0, and with respect to the other it is 0.
  • Multiply gate: also easy to understand, since for x·y the partial derivative with respect to x is y and with respect to y is x; hence in the figure the gradient on x is -8.0, that is, -4.0 * 2.0.

Note that the gradients returned by backpropagation are very sensitive to the scale of the inputs. Take the multiply gate: if the inputs x_i were all scaled up to 1000 times their original size while the weights w stayed unchanged, then during backpropagation the gradient returned along the x path would be unchanged, but the gradient on w would become 1000 times larger, which would force you to lower the learning rate to 1/1000 of its original value to keep things balanced. This is one reason why, in many neural networks, preprocessing the input data is so important.
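The 1000x effect is easy to see in code. A minimal sketch (my illustration, not from the original) for a single multiply gate f = w * x, where dout is the gradient arriving from above:

```python
# multiply gate: f = w * x
# backprop gives dw = x * dout and dx = w * dout
w, dout = 0.5, 1.0

x_small = 2.0
x_large = x_small * 1000  # same data, scaled up 1000x

dw_small = x_small * dout
dw_large = x_large * dout

# the gradient on w grows exactly with the input scale,
# while dx = w * dout is unchanged
print(dw_large / dw_small)  # 1000.0
```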

6.2 Gradients of vectorized operations

Everything above dealt with single-variable functions, but in practice we process large amounts of data (such as image data) whose dimensionality is high. We then need to extend single-variable backpropagation to gradient operations on vectors and matrices, paying particular attention to the dimensions of every matrix in the computation, as well as to transpose operations.

We extend the forward and backward passes to simple matrix operations; sample code is as follows:

# forward pass
W = np.random.randn(5, 10)
X = np.random.randn(10, 3)
D = W.dot(X)

# suppose we are given the gradient dD on D from further up the circuit
dD = np.random.randn(*D.shape)  # same shape as D
dW = dD.dot(X.T)  # .T computes the transpose; dW is the gradient on the W path
dX = W.T.dot(dD)  # dX is the gradient on the X path

7 Summary

Intuitively, backpropagation via the chain rule can be seen as differentiation on a graph.
Finally, here is a picture illustrating the actual optimization process: forward propagation followed by backward propagation of the error:



Author: yaoqiang2011, published 17:17:19 2016/1/16. Original text link
Read: 20456 comments: 0. View comments
]]>
Performance optimization: how an Android Splash page should be designed Http://prog3.com/sbdm/blog/u010687392/article/details/50525697 Http://prog3.com/sbdm/blog/u010687392/article/details/50525697 U010687392 17:12:04 2016/1/16 The current SplashActivity design

At present, most applications on the market start by launching a SplashActivity as a welcome screen. Why this design?
My personal summary is that it has three advantages:

1. It gives the user a better experience

For example, the background can be a changing picture, or it can say "welcome back, XXX"; Sina Weibo uses this kind of interaction.

2. It reduces the perceived startup time of the app

From an earlier blog post we know that an app's startup time is the time spent on Application initialization plus the drawing of MainActivity's interface. Because of its complexity, MainActivity is certainly slower than an interface that shows only a single picture, so adding a splash page that displays one image defers MainActivity's business logic and layout work and makes the application appear to start faster.

3. More can be done while the application starts

Generally, a SplashActivity is designed to stay on screen for 2 to 4 s, or its residence time is set dynamically according to how much data has been loaded. Since it stays that long, we can of course use the time behind the scenes to prepare things that let MainActivity display quickly: data preloading, SharedPreferences initialization, network requests, and so on.

Of course, you may wonder: couldn't the same initialization be done in Application? Isn't asynchronous data loading there just the same?

The answer is: not the same! As the earlier article said, Application initialization does not load any interface; only after it completes, and the Activity has been created, does the system begin drawing the theme background and the layout. So a splash page whose background is a lightweight welcome image can display its interface immediately, and other initialization work can still run behind that interface. Visually, the app starts quickly, while both the experience and the data initialization are taken care of.

Conversely, if too much is done in Application, there will be a noticeable delay after tapping the app icon, before the Activity is configured and drawn.

Deficiencies in most current applications' Splash page design

Most current applications implement the splash page as an Activity named SplashActivity: add a background image to it, then use new Handler().postDelayed() to wait a few seconds before calling startActivity() to jump to the main interface. This design looks quite good: you can initialize and preload data in SplashActivity, and it also improves the application's perceived startup.

And it really does improve perceived startup; after all, what we see first is SplashActivity's first frame. But after SplashActivity, MainActivity still has to be launched. Although some of MainActivity's data can be prefetched in SplashActivity, it must be passed along via an Intent, and MainActivity's layout has not been loaded yet, so its interface still needs to be inflated and drawn, and then filled with the data. So, on the jump to MainActivity, the interface drawing and the data loading (including the Intent data transfer) still have to be done.

The old SplashActivity design

The design process described above can be expressed like this:
(image omitted)

Design of Splash pages with excellent performance and experience

Looking at the design above: can some of those steps be removed? We want to both improve the app's startup speed and keep the data preloading, while cutting the unnecessary data transfer between Splash and MainActivity and the separate drawing of their views.

The answer is yes. Since running SplashActivity and MainActivity separately is still imperfect, consider merging them: display MainActivity from the very beginning, turn SplashActivity into a SplashFragment, and use a FrameLayout as the root layout to host the SplashFragment's interface. During the 2-4 seconds the SplashFragment is displayed, use the time gap to issue network requests and load data; when the SplashFragment finishes and is removed, what is revealed is MainActivity's content, with no need to wait for the network request to return data.
Of course, this approach loads the splash view together with the ContentView, which may affect the application's startup time; we can then use ViewStub to lazy-load some of MainActivity's views and reduce that impact.

The resulting design:
(image omitted)

Before and after optimization effect comparison

For testing purposes here, I set the splash page's delay to 2.5 s.
Before optimization:
(image omitted)
After optimization:
(image omitted)
After optimization, the splash is actually displayed with a Fragment and removed once it finishes, so while it is showing, MainActivity can already be loading the network data directly. As a result, when the SplashFragment finishes, page A is displayed immediately, and the ProgressBar stage of waiting for the network load is eliminated.
Code

    private Handler mHandler = new Handler();
    // ...
    final SplashFragment splashFragment = new SplashFragment();
    final FragmentTransaction transaction = getFragmentManager().beginTransaction();
    transaction.replace(R.id.frame, splashFragment);
    transaction.commit();
    // ...
    mHandler.postDelayed(new DelayRunnable(this, splashFragment, mProgressBar), 2500);
    // ...
    static class DelayRunnable implements Runnable {
        private WeakReference<Context> contextRef;
        private WeakReference<SplashFragment> fragmentRef;
        private WeakReference<ProgressBar> progressBarRef;

        public DelayRunnable(Context context, SplashFragment splashFragment, ProgressBar progressBar) {
            contextRef = new WeakReference<Context>(context);
            fragmentRef = new WeakReference<SplashFragment>(splashFragment);
            progressBarRef = new WeakReference<ProgressBar>(progressBar);
        }

        @Override
        public void run() {
            ProgressBar progressBar = progressBarRef.get();
            if (progressBar != null)
                progressBar.setVisibility(View.GONE);
            Activity activity = (Activity) contextRef.get();
            if (activity != null) {
                SplashFragment splashFragment = fragmentRef.get();
                if (splashFragment == null)
                    return;
                final FragmentTransaction transaction = activity.getFragmentManager().beginTransaction();
                transaction.remove(splashFragment);
                transaction.commit();
            }
        }
    }

    @Override
    protected void onDestroy() {
        super.onDestroy();
        mHandler.removeCallbacksAndMessages(null);
    }

Here a FrameLayout is used as the root of the MainActivity layout so that the SplashFragment can be displayed full screen.
To do even better, consider ViewStub and load additional views only after the SplashFragment has been shown.

As for coupling, it is in fact very low: the splash page is configured in its own dedicated SplashFragment, and MainActivity merely controls its loading and removal.

Author: u010687392, published 17:12:04 2016/1/16. Original text link
Read: 64 comments: 1. View comments
]]>
Node.js learning: using Express to build a simple web calculator Http://prog3.com/sbdm/blog/jdh99/article/details/50528712 Http://prog3.com/sbdm/blog/jdh99/article/details/50528712 Jdh99 17:05:15 2016/1/16 Node.js learning: using Express to build a simple web calculator


This article's blog link: Http://prog3.com/sbdm/blog/jdh99. Author: jdh. Please indicate the source when reproducing.

 

Environment

Host: WIN10


Express installation:

1. Install express-generator

Command: npm install -g express-generator

2. Install Express

Command: npm install -g express

3. Verify that the installation succeeded

Command: express -V

View help: express --help


Build the project:

express -e calculator

cd calculator && npm install

Run the default page:

Command: npm start, or node ./bin/www

The port is configured in ./bin/www.



The simple web calculator in action:

It can perform addition.

Source code:

views/index.ejs: add the input boxes

<!DOCTYPE html>
<html>
<head>
<title><%= title %></title>
<link rel='stylesheet' href='/stylesheets/style.css' />
</head>
<body>
<form method="post">
<p>Calculator</p>
<input type="text" name="num1" value="<%= num1 %>" /><br />
<input type="text" name="num2" value="<%= num2 %>" /><br />
<input type="submit" value="compute" />
<p>result: <%= sum %></p>
</form>
</body>
</html>

routes/index.js: compute the submitted data and render the result.

var express = require('express');
var router = express.Router();

/* GET home page. */
router.get('/', function(req, res, next) {
  res.render('index', {
    title: 'Calculator V1.0',
    num1: 0,
    num2: 0,
    sum: 0
  });
});

router.post('/', function(req, res) {
  console.log('received:', req.body.num1, req.body.num2);
  var sum = parseFloat(req.body.num1) + parseFloat(req.body.num2);
  console.log('sum =', sum);

  res.render('index', {
    title: 'Calculator V1.0',
    num1: req.body.num1,
    num2: req.body.num2,
    sum: sum
  });
});

module.exports = router;




Author: jdh99, published 17:05:15 2016/1/16. Original text link
Read: 56 comments: 0. View comments
]]>
Hilbert space in detail - Mathematical principles in image processing explained, part 23 http://prog3.com/sbdm/blog/baimafujinji/article/details/50528565 http://prog3.com/sbdm/blog/baimafujinji/article/details/50528565 baimafujinji 16:38:16 2016/1/16

Welcome to follow my blog column "Mathematical principles in image processing explained".

For the full table of contents, see "Mathematical principles in image processing explained (master outline)":

http://prog3.com/sbdm/blog/baimafujinji/article/details/48467225

Mathematical principles in image processing explained (index of published parts):

http://prog3.com/sbdm/blog/baimafujinji/article/details/48751037

For discussion and study you can join the image processing research QQ group (529549320)


It has been a while since I last updated my "Mathematical principles in image processing explained" column. The basic parts have mostly been published, and we are now entering the "deep water zone". On the one hand, the articles are getting longer, so they take more effort to write. On the other hand, the topics now enter the territory of differential equations and functional analysis, which is really hard for people without a mathematics background :(. Moreover, if you are a beginner in image processing, still working out Gaussian smoothing and median filtering, you basically won't need any of this; if that describes you, I don't recommend reading this article. Also, if your goal is to write a program like the "magician" one (http://prog3.com/sbdm/blog/baimafujinji/article/details/50500757), you don't need to go this deep either; and people who study the topics below are better served by MATLAB.

My view is that it is the image processing researchers who have to read papers every day who should learn differential equations and functional analysis. If you read formula-filled image processing papers every day but don't understand the L1 and L2 norms, Hilbert spaces, the Poisson equation, or the Green's function method, I really cannot imagine how painful that must be. Of course, this part again builds on the articles I published earlier; for example, you will once more meet Parseval's identity, which I covered in the Fourier transform article. I hope you still remember what it is :)


2.3.6 Hilbert space


Definition: an inner product space that is complete in the sense of the norm defined by its inner product is called a Hilbert space.
Hilbert spaces are a class of normed linear spaces with very good properties; they have extremely wide applications in engineering, and in a Hilbert space the best approximation problem can be solved quite satisfactorily.



Author: baimafujinji, published 16:38:16 2016/1/16. Original text link
Read: 1696 comments: 0. View comments
]]>
UI components: AdapterView and its subclasses (3), the Spinner control in detail http://prog3.com/sbdm/blog/tuke_tuke/article/details/50528500 http://prog3.com/sbdm/blog/tuke_tuke/article/details/50528500 tuke_tuke 16:34:05 2016/1/16 A Spinner provides a quick way to select one value from a data set. By default, a Spinner displays the currently selected value; clicking the Spinner pops up a dropdown menu or a dialog box containing all the options, from which a new value can be selected for the Spinner.

In this article I will discuss:

1. Basic usage of Spinner

2. XML attributes of Spinner

3. Setting the Spinner's Adapter (the entries attribute, ArrayAdapter, and a custom BaseAdapter)


The simplest Spinner usage is the android:entries attribute: reference an arrays resource directly to display a drop-down list.

<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:orientation="vertical" >

    <!-- spinner whose values are provided by entries -->

    <Spinner
        android:id="@+id/spinner1"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:dropDownWidth="200dp"
        android:entries="@array/province"
        android:prompt="@string/promp" />
</LinearLayout>
Here android:entries="@array/province" means the Spinner's data collection comes from the array resource province, which is defined in values/arrays.xml:

<?xml version="1.0" encoding="utf-8"?>
<resources>
    <string-array name="province">
        <item>Hunan</item>
        <item>Hubei</item>
        <item>Beijing</item>
        <item>Shanghai</item>
    </string-array>

</resources>


Of course, in general we need to respond to the Spinner's selection events, which can be done via the OnItemSelectedListener callback methods:

public class MainActivity extends Activity {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        Spinner spinner = (Spinner) findViewById(R.id.spinner1);
        spinner.setOnItemSelectedListener(new OnItemSelectedListener() {
            @Override
            public void onItemSelected(AdapterView<?> parent, View view,
                    int pos, long id) {

                String[] province = getResources().getStringArray(R.array.province);
                Toast.makeText(MainActivity.this, "you clicked: " + province[pos], Toast.LENGTH_SHORT).show();
            }
            @Override
            public void onNothingSelected(AdapterView<?> parent) {
                // another interface callback
            }
        });
    }

}

2. XML attributes of Spinner


android:entries: bind the data source directly in the XML layout file (it may be omitted, in which case the data can be bound dynamically in the Activity)

android:prompt: the title of the dialog box when the Spinner pops up its selection dialog (for example android:prompt="Journey to the West"):


android:spinnerMode: the Spinner's display form. Its only values are "dialog" and "dropdown", i.e. the dialog-box form and the drop-down-list form.

android:dropDownHorizontalOffset (setDropDownHorizontalOffset(int)): when spinnerMode="dropdown", the horizontal offset of the drop-down selection window relative to the Spinner window.

android:dropDownVerticalOffset (setDropDownVerticalOffset(int)): when spinnerMode="dropdown", the vertical offset of the drop-down selection window relative to the Spinner window. You can also reference a resource (format: @[package:]type/name) or a theme attribute containing a value of this type.

android:dropDownSelector: the display effect of the list selector when spinnerMode="dropdown". It can reference another resource in the "@[+][package:]type/name" format, apply a theme attribute in the "?[package:][type/]name" format, or be a color value in "#rgb", "#argb", "#rrggbb", or "#aarrggbb" format.

android:dropDownWidth: when spinnerMode="dropdown", sets the width of the drop-down box.

This attribute can be a floating point dimension value with a unit, such as 14.5sp. Valid units include: px (pixels), dp (density-independent pixels), sp (pixels scaled by font size), in (inches), mm (millimeters).

It can also be one of the following constants:
fill_parent = -1: the width of the drop-down box should match the width of the screen. This constant is deprecated as of API Level 8 and replaced by match_parent.
match_parent = -1: the width of the drop-down box should match the width of the screen. Introduced in API Level 8.
wrap_content = -2: the width of the drop-down box should fit its content.

android:gravity: sets the alignment of the currently selected item.

android:popupBackground: when spinnerMode="dropdown", sets the background of the drop-down list. It can reference another resource in the "@[+][package:]type/name" format, apply a theme attribute in the "?[package:][type/]name" format, or be a color value in "#rgb", "#argb", "#rrggbb", or "#aarrggbb" format.


3. Setting the Spinner's adapter with ArrayAdapter to provide the list items

The following provides two Spinners: the first uses the drop-down-list form, with the android:entries attribute providing the array; the second uses the dialog form, with an ArrayAdapter providing the adapter.

Main.xml

<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:orientation="vertical" >

    <!-- spinner whose values are provided by entries -->

    <Spinner
        android:id="@+id/spinner1"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:dropDownWidth="200dp"
        android:entries="@array/province"
        />

    <!-- spinner whose values are provided by an adapter; android:spinnerMode="dialog" shows the drop-down list as a dialog box -->

    <Spinner
        android:id="@+id/spinner2"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:popupBackground="#f00"
        android:spinnerMode="dialog"
        android:prompt="@string/promp" />

</LinearLayout>
MainActivity.java

public class MainActivity extends Activity {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        // get the Spinner component from the layout file
        Spinner sp = (Spinner) findViewById(R.id.spinner2);
        String[] arr = {"Tang Seng", "Sun Wukong", "Zhu Bajie", "Sha Monk"};
        // create the adapter object
        ArrayAdapter<String> aa = new ArrayAdapter<String>(this, android.R.layout.simple_list_item_1, arr);
        sp.setAdapter(aa);

    }
}


This is the standard way to use a Spinner. In the code above, one line determines the appearance of the Spinner:

ArrayAdapter<String> aa = new ArrayAdapter<String>(this, android.R.layout.simple_list_item_1, arr);
The second parameter is the style used for the Spinner when the menu is not shown; android.R.layout.simple_list_item_1 (like android.R.layout.simple_spinner_item) is one of the system's built-in layouts.

4. Creating a Spinner with a custom Adapter

This applies to more complex Spinner situations, for example items with icons.
Here we define a Spinner for selecting a contact.

Main.xml

<LinearLayout
    android:layout_width="fill_parent"
    android:layout_height="80dip"
    android:orientation="vertical" >

    <Spinner
        android:id="@+id/spinner2"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        />
</LinearLayout>
Person.java

package com.example.spinnerdemo;

public class Person {
    private String personName;
    private String personAddress;
    public Person(String personName, String personAddress) {
        super();
        this.personName = personName;
        this.personAddress = personAddress;
    }
    public String getPersonName() {
        return personName;
    }
    public void setPersonName(String personName) {
        this.personName = personName;
    }
    public String getPersonAddress() {
        return personAddress;
    }
    public void setPersonAddress(String personAddress) {
        this.personAddress = personAddress;
    }

}


Custom MyAdapter.java

package com.example.spinnerdemo;

import java.util.List;
import android.content.Context;
import android.view.LayoutInflater;
import android.view.View;
import android.view.ViewGroup;
import android.widget.BaseAdapter;
import android.widget.ImageView;
import android.widget.TextView;

/**
 * Custom adapter class
 * @author jiangqq <a href=http://prog3.com/sbdm/blog/jiangqq781931404></a>
 *
 */
public class MyAdapter extends BaseAdapter {
    private List<Person> mList;
    private Context mContext;

    public MyAdapter(Context pContext, List<Person> pList) {
        this.mContext = pContext;
        this.mList = pList;
    }

    @Override
    public int getCount() {
        return mList.size();
    }

    @Override
    public Object getItem(int position) {
        return mList.get(position);
    }

    @Override
    public long getItemId(int position) {
        return position;
    }

    /**
     * The important code is below: each item's layout holds two text views;
     * of course, other components can be added as well, which makes it very flexible.
     */
    @Override
    public View getView(int position, View convertView, ViewGroup parent) {
        LayoutInflater _LayoutInflater = LayoutInflater.from(mContext);
        convertView = _LayoutInflater.inflate(R.layout.item_custom, null);
        if (convertView != null) {
            ImageView imageView = (ImageView) convertView.findViewById(R.id.image);
            imageView.setImageResource(R.drawable.ic_launcher);
            TextView _textView1 = (TextView) convertView.findViewById(R.id.textview1);
            TextView _textView2 = (TextView) convertView.findViewById(R.id.textview2);
            _textView1.setText(mList.get(position).getPersonName());
            _textView2.setText(mList.get(position).getPersonAddress());
        }
        return convertView;
    }
}
MainActivity.java

// Initialize the control
Spinner spinner2 = (Spinner) findViewById(R.id.spinner2);
// Build the data source
List<Person> persons = new ArrayList<Person>();
persons.add(new Person("Zhang San", "Shanghai"));
persons.add(new Person("Li Si", "Shanghai"));
persons.add(new Person("Wang Wu", "Beijing"));
persons.add(new Person("Zhao Liu", "Guangzhou"));
// Create the adapter and bind the data source
MyAdapter myAdapter = new MyAdapter(this, persons);
// Bind the adapter
spinner2.setAdapter(myAdapter);




Author: tuke_tuke published in 16:34:05 2016/1/16. Original link
Read: 73 Comments: 0 View comments
]]>
Creating a ZooKeeper session (implementing a Watcher) http://prog3.com/sbdm/blog/luckyzhoustar/article/details/50528612 http://prog3.com/sbdm/blog/luckyzhoustar/article/details/50528612 zhouchaoqiang 16:21:17 2016/1/16 In the previous chapter we used zkCli to explore basic ZooKeeper operations. In the coming chapters we will learn how to use the ZooKeeper API in an application. Below we use a program to show how to create a session and a watcher, and then begin an example of a master-worker structure.


Creating a ZooKeeper session


As shown below, once an established session's connection is broken, it migrates to another ZooKeeper server. As long as the session stays alive, the handle remains valid, and the ZooKeeper client library works to keep the connection alive. If the handle is closed, the client library tells the ZooKeeper server to terminate the session. If ZooKeeper decides the client has died, it invalidates the session; if the client later tries to resume that session, the handle is used to check whether the session is still valid.



The ZooKeeper constructor looks like this:

ZooKeeper(
    String connectString,
    int sessionTimeout,
    Watcher watcher)

connectString: contains the host names and port numbers of the ZooKeeper servers

sessionTimeout: the session timeout, in milliseconds

watcher: an object we need to create in order to receive session events. Because Watcher is an interface, we must implement it to complete the initialization of the ZooKeeper constructor. The client uses it to monitor the session state of ZooKeeper: events are created when a client establishes or loses a connection, the same mechanism can be used to monitor changes in ZooKeeper data, and finally, if the session expires, an event is also delivered to the client.

Implementing a watcher

In order for the client to be notified, we need to implement a watcher. The interface is shown below:

public interface Watcher {

    void process(WatchedEvent event);

}

 

Implementation of a watcher

/**
 * @FileName: Master.java
 * @Package: com.test
 * @Description: TODO
 * @author: LUCKY
 * @date: January 15th 2016, 7:54:58 PM
 * @version V1.0
 */
package com.test;

import java.io.IOException;

import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

/**
 * @ClassName: Master
 * @Description: a Master that implements Watcher
 * @author: LUCKY
 * @date: January 15th 2016, 7:54:58 PM
 */
public class Master implements Watcher {

    ZooKeeper zk;
    String hostPort;

    public Master(String hostPort) {
        this.hostPort = hostPort;
    }

    void startZk() throws IOException {
        zk = new ZooKeeper(hostPort, 15000, this);
    }

    public void process(WatchedEvent event) {
        System.out.println(event);
    }

    void stopZk() throws Exception {
        zk.close();
    }

    public static void main(String[] args) throws Exception {
        Master m = new Master("100.66.162.90:2180");
        m.startZk();

        Thread.sleep(60000);
        m.stopZk();
    }
}


The example above is a simple implementation of the Master class. You can try connecting and watch the information printed on the console.


Author: zhouchaoqiang published in 16:21:17 2016/1/16. Original link
Read: 89 Comments: 0 View comments
]]>
Kicking off the big change (part two): distributed computing frameworks and big data Http://prog3.com/sbdm/blog/bluecloudmatrix/article/details/50525225 Http://prog3.com/sbdm/blog/bluecloudmatrix/article/details/50525225 BlueCloudMatrix 16:20:00 2016/1/16 Tachyon

Tachyon profile

PASA big data lab, Nanjing University

SPARK/TACHYON: memory based distributed storage system


Spark on Yarn

The whole process of building a Spark on Yarn cluster

Spark on Yarn

Spark on YARN cluster installation and deployment


Hadoop compiler error

I compiled Hadoop in an IBM JAVA environment. The errors hit during compilation and their solutions are listed below for reference.

1) Antrun

Failed to execute goal
org.apache.maven.plugins:maven-antrun-plugin:1.6:run (create-testdirs)

Http://stackoverflow.com/questions/17126213/building-hadoop-with-maven-failed-to-execute-goal-org-apache-maven-pluginsma

chown -R username parent-directory
(such as chown -R root ./)
mvn install -DskipTests

2) Build failed on TestSecureLogins with the IBM JAVA JVM

package com.sun.security.auth.module does not exist

Https://issues.apache.org/jira/browse/HADOOP-11783

This patch is specifically for the IBM JAVA environment.


3) If, after the two fixes above, the build reports success suspiciously quickly, and there is no tar package named hadoop-2.7.1.tar.gz under hadoop-release-2.7.1/hadoop-dist/target/ (assuming the downloaded source folder is named hadoop-release-2.7.1), the compile did not really succeed. Return to the hadoop-release-2.7.1 root directory and continue with:

mvn package -Pdist -DskipTests -Dtar

Http://www.iteblog.com/archives/897

The compile takes a while; enjoy the thrilling wait :)

Author: BlueCloudMatrix published in 16:20:00 2016/1/16. Original link
Read: 87 Comments: 0 View comments
]]>
Android application startup process Http://prog3.com/sbdm/blog/cuiran/article/details/50528516 Http://prog3.com/sbdm/blog/cuiran/article/details/50528516 Cuiran 15:59:24 2016/1/16 The main body of an application in the Android system is made up of the ActivityThread. But many details are involved: who creates the ActivityThread, and when? How is it related to system services such as ActivityManagerService and WindowManagerService? All of this needs to be understood.

There are usually two ways to start an application in the system.

  • Clicking the application's icon in the Launcher
This startup mode is mostly user-initiated; by default an APK application has an icon on the Launcher main screen, and clicking it starts the application's specified Activity.
  • Starting through startActivity
This way of starting is usually found inside source code, for example one Activity starting another through startActivity.

The flow of the two modes is basically the same; ultimately both complete by calling ActivityManagerService.startActivity. The whole process is shown in the figure:


If all goes well, AMS will try to start the specified Activity. Of the Activity lifecycle callbacks, onCreate, onResume, onPause, onStop and so on, it is onPause that is called at this point, because the system stipulates that before a new Activity starts, the Activity currently in the resumed state must be paused. Compared with Windows's multi-window system, this management model is much simpler, and it meets the general requirements of mobile devices. Setting the Activity to paused is done mainly through ApplicationThread.schedulePauseActivity in the Activity's process. ApplicationThread is a Binder channel between the application process and AMS.

When the ActivityThread receives the pause request, the main thread of that process does further processing. Besides the familiar call to Activity.onPause(), it must also notify WindowManagerService of the change. If the process of the Activity about to start does not yet exist, AMS also needs to start it up, a step carried out through Process.start.

    

Author: Cuiran published in 15:59:24 2016/1/16. Original link
Read: 118 Comments: 0 View comments
]]>
Android dynamic loading and hook data summary Http://prog3.com/sbdm/blog/cuiran/article/details/50528457 Http://prog3.com/sbdm/blog/cuiran/article/details/50528457 Cuiran 15:46:10 2016/1/16 Java Hook Android

Http://www.52pojie.cn/thread-288128-2-1.html

Http://www.52pojie.cn/thread-426890-1-2.html


Apk reinforcement

Http://prog3.com/sbdm/blog/jiangwei0910410003/article/details/48415225


Android automatic packing procedure

Http://www.jizhuomi.com/android/environment/281.html


What is the principle behind 360's Android plug-in (DroidPlugin) running an APK without installing it, and what does it use?

Http://www.zhihu.com/question/35138070



Http://www.kanxue.com/bbs/forumdisplay.php? F=161?




Android plug-in development: dynamically loading an Activity (running a program without installation)

Http://prog3.com/sbdm/blog/jiangwei0910410003/article/details/48104455



Remote service

Http://www.2cto.com/kf/201402/276822.html



Android Activity learning notes: Activity start and creation

Http://www.cnblogs.com/bastard/archive/2012/04/07/2436262.html


Android: mastering Activity

Http://www.imooc.com/learn/413


The Android system Activity window start process

Http://www.tuicool.com/articles/yQRrUv


Detailed Android application (APP) startup process

Http://laokaddk.blog.51cto.com/368606/1206822


Android plug-in dynamic upgrade


Http://www.trinea.cn/android/android-plugin/

Https://github.com/cayden/Android-Plugin-Framework





Android plug-in development: first steps

Http://my.oschina.net/kymjs/blog/327232



Android source code analysis of the application startup process


Http://prog3.com/sbdm/blog/luoshengyang/article/details/6689748

Finally, the summary from that old post:

 Starting a new Activity within an application goes through many steps, but viewed as a whole, the process divides into the following four stages:

       1. Steps 1-10: the application's MainActivity notifies ActivityManagerService through the Binder inter-process communication mechanism that it is going to start a new Activity;

       2. Steps 11-15: ActivityManagerService notifies MainActivity through the Binder inter-process communication mechanism to enter the Paused state;

       3. Steps 16-22: MainActivity notifies ActivityManagerService through the Binder inter-process communication mechanism that it has entered the Paused state, so ActivityManagerService gets ready to start a new Activity in the process and task in which MainActivity lives;

       4. Steps 23-29: ActivityManagerService notifies the ActivityThread in which MainActivity lives, through the Binder inter-process communication mechanism, that everything is ready and the Activity start operation can really be performed.



Contrast

  • DynamicLoadApk
    Migration cost is heavy: "that" must be used instead of "this", and every Activity must inherit from the Activity proxy (the proxy manages the lifecycle of all Activities).
    Cannot start an Activity inside an Apk.
    Service and BroadcastReceiver are not supported. As mentioned at the beginning of the article, an Apk must meet certain conditions to become a plug-in; the standards a plug-in Apk developed with this mechanism must follow are:
  1. Use "this" with caution (interfaces excepted): "this" points at the current object, i.e. the Activity inside the Apk, but since that Activity is not an Activity in the ordinary sense, "this" is meaningless. If, however, "this" represents an interface rather than a Context (for example the Activity implements an interface), then "this" remains valid.
  2. Use "that": since "this" cannot be used, "that" is used instead. "that" is a member of BaseActivity, the base class of the Apk's Activities; when the Apk is installed and running it points at "this", and when not installed it points at the proxy Activity in the host program. Either way, "that" is better than "this".
  3. Calling Activity member methods: in principle member methods should be called through "that", but because most of the APIs have been rewritten, only part of the APIs need to be called through "that". Meanwhile, the Apk can still run normally after being installed.
  4. Constraints on starting a new Activity: starting an external Activity is not limited, but starting an Activity inside the Apk is restricted: first, because the Apk's Activities are not registered, implicit invocation is not supported; second, the new methods defined in BaseActivity, startActivityByProxy and startActivityForResultByProxy, must be used, and LaunchMode is not supported.
  5. Service and BroadcastReceiver, components that must be registered, are currently not supported; broadcasts, however, can be registered dynamically in code.
  • AndroidDynamicLoader
    Migration cost is heavy:
    For resources, MyResources.getResource(Me.class) must be used instead of Context.getResources().
    Fragment is used as the UI container: every page uses a Fragment instead of an Activity, and page jumps are implemented by mapping URLs.
  • Android-pluginmgr
    Not tested with a production-environment App.
    Service and BroadcastReceiver are not supported.
  • DroidPlugin from Qihoo 360
    A very interesting framework! DroidPlugin can start an uninstalled App inside another App. This feature probably suits Qihoo 360's own products, for security reasons. The App and the host App have no connection to each other; resource and code calls between them are not supported.
    Custom notification bars are not supported.


Author: Cuiran published in 15:46:10 2016/1/16. Original link
Read: 116 Comments: 0 View comments
]]>
New features of the network brought by the 4.4 version of the Linux kernel Http://prog3.com/sbdm/blog/dog250/article/details/50528426 Http://prog3.com/sbdm/blog/dog250/article/details/50528426 Dog250 15:42:22 2016/1/16

TCP listener Lockless

Let's start with TCP syncookies. If the syncookie mechanism alone were enough, how nice that would be; but it is not, because it loses a lot of option-negotiation information, and that information matters greatly for TCP performance. TCP syncookies mainly guard against half-connection SYN Flood attacks: a huge number of nodes send masses of SYN packets and then ignore the replies, while the attacked protocol stack creates a request for every SYN it receives and binds it into the Listener's SYN request queue. This consumes a great deal of memory.
But think carefully: leaving option negotiation aside, for just the TCP SYN and SYNACK of the 3-way handshake, only the Listener needs to be consulted; as long as it exists, a SYNACK can be constructed directly from the SYN packet. To remember the information of the second handshake packet without keeping state, there are two ways. The first is the syncookie mechanism: encode the information and echo it back, and when the third-handshake ACK arrives, decode it from the ACK's sequence number, construct the socket, and insert it into the Listener's accept queue. The second is to allocate memory locally and record the client's connection information; when the third-handshake ACK arrives, find the request, construct the socket, and insert it into the Listener's accept queue.
Before 4.4, a request belonged to a Listener; that is, a Listener owned a request queue and every request was constructed against it. The 4.4 kernel's breakthrough is to construct a new socket from the request and insert it into the global socket hash table! This socket only records a lightweight reference to its Listener. When the third-handshake ACK arrives, the socket hash table lookup no longer finds the Listener but the new socket constructed when the SYN arrived, so the traditional logic below can set the Listener free:
Traditional TCP protocol stack:
sk = lookup(skb);
lock_sk(sk);
if (sk is a Listener); then
    process_handshake(sk, skb);
else
    process_data(skb);
endif
unlock_sk(sk);
Clearly, while sk is locked it becomes a bottleneck, as all the handshake logic is handled under the lock. The 4.4 kernel changes all this. The new logic follows:
sk = lookup_from_global(skb);
if (sk is a Listener); then
    rv = process_syn(skb);
    new_sk = build_synack_sk(skb, rv);
    new_sk.listener = sk;
    new_sk.state = SYNRECV;
    insert_sk_into_global(new_sk);
    send_synack(skb);
    goto done;
else if (sk.state == SYNRECV); then
    listener = sk.listener;
    child_sk = build_child_sk(skb, sk);
    remove_sk_from_global(sk);
    add_sk_into_acceptq(listener, child_sk);
fi
lock_sk(sk);
process_data(skb);
unlock_sk(sk);
done:

In this logic, only a fine-grained lock on the specific queue is needed; the whole socket need not be locked. The syncookie logic becomes even simpler: not even a SYNRECV socket has to be constructed, as long as the Listener is guaranteed to exist!
I was squatting on the toilet on Thursday morning when I suddenly saw the new 4.4 features and was shocked: this is exactly the idea that occurred to me in 2014, which I never followed up for lack of an environment, and now it is in mainline. I have to say that is a good thing. My idea back then was that a SYNACK should be constructed from a SYN packet with complete disregard for the Listener; the negotiated information could be stored elsewhere without being bound to the Listener, liberating the Listener from this duty. What I did not think of was constructing a socket and inserting it, in parallel with all other sockets, into the same socket hash table.
I think the pre-4.4 logic was clear and simple: handshake packets and data packets were processed by exactly the same logic, while 4.4 complicates the code, separating out so many if/else branches... but this was inevitable. In fact, a request constructed from a SYN should itself be bound to the Listener; once you pursue the optimization, the code becomes complicated, but if you put a lot of effort into the code itself, it can still be very nice. I just lack that ability; look at the code I write.
The idea behind this locklessness is similar to nf_conntrack's, but I think conntrack could play the same game for connection-related logic too.

Listener CPU TCP affinity and REUSEPORT

This is the accept queue optimization that goes with the lockless TCP Listener! As everyone knows, a Listener has only one accept queue, and in a multi-core environment this single queue is definitely a bottleneck. How could that make a high-performance server?
In fact, this problem was long since solved by REUSEPORT. REUSEPORT allows multiple independent sockets to listen on the same IP/port at the same time, which for today's multi-queue NICs and multi-CPU environments is definitely a gospel. However, it is like widening a road to many lanes without traffic rules: per-lane performance may drop, but the congestion is far less!
The 4.4 kernel introduces a socket option, SO_INCOMING_CPU. If a socket's option is set to N, then only when the protocol stack logic is executing on CPU N will packets be inserted into that socket. In the code, this shows up as extra points in compute_score: besides destination IP, destination port, source IP and source port, the CPU has also become a matching criterion.
As the patch explains, this feature combined with REUSEPORT and a multi-queue NIC is bound to be a delicacy!
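The scoring idea can be sketched in a few lines. The following self-contained Java analogue (the class, method, and scoring values are invented for illustration; they are not the kernel's actual compute_score code) shows how a matching SO_INCOMING_CPU setting tips the socket-lookup score:

```java
// Toy analogue of the extended compute_score: a 4-tuple match gives the base
// score, and a matching SO_INCOMING_CPU adds a bonus so that the per-CPU
// socket wins the lookup on its own CPU.
class IncomingCpuSketch {
    static int computeScore(boolean tupleMatch, int socketCpu, int currentCpu) {
        if (!tupleMatch) return -1;           // not a candidate at all
        int score = 4;                         // base: addresses and ports matched
        if (socketCpu == currentCpu) score++;  // SO_INCOMING_CPU bonus
        return score;
    }
}
```

With one REUSEPORT socket per CPU, each pinned via this option, the socket whose CPU matches the softirq's CPU scores highest and receives the packet.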

New flow-based multipath routing

In the route-cache era, a routing cache entry was an n-tuple including source information; when a packet matched a fib entry a cache entry was created, and subsequent lookups hit the cache first, so routing was flow-based. After the route cache was removed, however, multipath routing became packet-based, which for a protocol like TCP is bound to cause reordering problems. For this, the 4.4 kernel introduces source information into the hash calculation for multipath routing, avoiding the problem. As long as the calculation method does not change, a flow of data always hashes to the same dst.
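The effect of hashing in the source information can be modeled in a few lines of Java (a toy model with invented names, not kernel code): because the path index is a pure function of the flow tuple, every packet of one flow picks the same next hop, so no reordering is introduced:

```java
import java.util.Objects;

// Toy flow-based multipath selection: the chosen path depends only on the
// flow tuple, so all packets of a given flow map to the same dst.
class MultipathSketch {
    static int selectPath(String srcIp, int srcPort,
                          String dstIp, int dstPort, int nPaths) {
        int h = Objects.hash(srcIp, srcPort, dstIp, dstPort);
        return Math.floorMod(h, nPaths);   // stable for a given flow
    }
}
```

A packet-based scheme would mix an ever-changing value (for example a per-packet counter) into the hash, spreading one flow's packets across paths and risking out-of-order delivery for TCP.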

Socket routing cache with a version number

This is not a feature of the 4.4 kernel; it is some thinking of my own. Early demux has been introduced into the kernel to eliminate route lookups for inbound local traffic: since a route lookup is followed by a socket lookup anyway, why not look up the socket directly and cache the routing information in the lookup result? This option is enabled for devices that serve local traffic.
But for outbound flows there is still a lot of waste in route lookups. Although IP is connectionless, a socket, whether TCP or a connected UDP socket, clearly identifies a 5-tuple; would it not be better to store the routing information in the socket? Fine! Many will ask how to solve the synchronization problem: what if the routing table changes, notify every socket? If you are led into designing an efficient synchronization protocol, you lose! The approach is very simple: introduce two counters, a cache counter and a global counter. The socket route cache looks like this:
sk_rt_cache {
    atomic_t version;
    dst_entry *dst;
};
The global counter is as follows:
atomic_t gversion;
Whenever a socket sets its route cache, it reads the global gversion value into the cache's version; whenever a route changes, the global gversion counter is incremented. If the cache counter's value matches the global counter's, the cache is usable; otherwise it is not. The dst itself, of course, is protected by its reference count.
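The scheme fits in a few lines. The following self-contained Java analogue (class and method names are illustrative, not kernel code) models the socket-held cache version against the global generation counter:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative analogue of the proposed socket route cache: a cached entry
// is valid only while its recorded version matches the global counter.
class RouteCacheSketch {
    static final AtomicInteger gversion = new AtomicInteger(0);

    String cachedDst;        // stands in for the dst_entry pointer
    int cachedVersion = -1;

    // Setting the cache records the current global version alongside the entry.
    void setCache(String dst) {
        cachedVersion = gversion.get();
        cachedDst = dst;
    }

    // The cache is usable only if no routing change bumped the counter since.
    String lookup() {
        return (cachedVersion == gversion.get()) ? cachedDst : null;
    }

    // A routing table change only increments the global counter; no
    // per-socket notification protocol is needed.
    static void routeTableChanged() {
        gversion.incrementAndGet();
    }
}
```

A socket whose cached version lags the global counter simply falls back to a full route lookup and refreshes its cache; the only synchronization cost is one atomic read per lookup.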
Author: dog250 published in 15:42:22 2016/1/16. Original link
Read: 121 Comments: 0 View comments
]]>
AdapterView UI components and their subclasses (two): using the GridView grid view Http://prog3.com/sbdm/blog/tuke_tuke/article/details/50528329 Http://prog3.com/sbdm/blog/tuke_tuke/article/details/50528329 Tuke_tuke 15:40:39 2016/1/16 GridView grid view properties:


android:numColumns="auto_fit" -- the number of columns; set to automatic here, it can also be given numerically
android:columnWidth="90dp" -- the width of each column, i.e. the width of an Item
android:verticalSpacing="10dp" -- the vertical margin between cells
android:horizontalSpacing="10dp" -- the horizontal margin between cells

android:stretchMode -- sets the stretch mode


none: no stretching

spacingWidth: stretch the spacing between cells

columnWidth: stretch the columns themselves

A GridView organizes and displays multiple components in rows and columns; a ListView is a special case of a GridView. The adapter is used the same way as with a ListView:

1, prepare the data source
2, create the adapter
3, load the adapter
A GridView (grid view) displays content in rows and columns; it is generally used to display pictures and similar content. To build a nine-square grid, for example, GridView is the first choice and also the simplest.

MainActivity.java

package com.example.testgridview;

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import android.app.Activity;
import android.os.Bundle;
import android.widget.GridView;
import android.widget.SimpleAdapter;

public class MainActivity extends Activity {
    private GridView gview;
    private List<Map<String, Object>> data_list;
    private SimpleAdapter sim_adapter;
    // Wrap the images in an array
    private int[] icon = { R.drawable.address_book, R.drawable.calendar,
            R.drawable.camera, R.drawable.clock, R.drawable.games_control,
            R.drawable.messenger, R.drawable.ringtone, R.drawable.settings,
            R.drawable.speech_balloon, R.drawable.weather, R.drawable.world,
            R.drawable.youtube };
    private String[] iconName = { "Contacts", "Calendar", "Camera", "Clock",
            "Games", "SMS", "Ringtone", "Settings", "Voice", "Weather",
            "Browser", "Video" };

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.test);
        gview = (GridView) findViewById(R.id.gview);
        // Create the list
        data_list = new ArrayList<Map<String, Object>>();
        // Fetch the data
        getData();
        // Create the adapter
        String[] from = { "image", "text" };
        int[] to = { R.id.image, R.id.text };
        sim_adapter = new SimpleAdapter(this, data_list, R.layout.item, from, to);
        // Attach the adapter
        gview.setAdapter(sim_adapter);
    }

    public List<Map<String, Object>> getData() {
        // icon and iconName have the same length; either works here
        for (int i = 0; i < icon.length; i++) {
            Map<String, Object> map = new HashMap<String, Object>();
            map.put("image", icon[i]);
            map.put("text", iconName[i]);
            data_list.add(map);
        }

        return data_list;
    }
}
test.xml

<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:orientation="vertical"
    android:background="#000" >

    <GridView
        android:id="@+id/gview"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:numColumns="auto_fit"
        android:columnWidth="80dp"
        android:stretchMode="columnWidth" >
    </GridView>
</LinearLayout>
item.xml

<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:orientation="vertical"
    android:gravity="center"
    android:padding="10dp" >

    <ImageView
        android:src="@drawable/ic_launcher"
        android:id="@+id/image"
        android:layout_width="60dp"
        android:layout_height="60dp" />

    <TextView
        android:id="@+id/text"
        android:layout_marginTop="5dp"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:textColor="#ffffff"
        android:text="text" />
</LinearLayout>


Another example:

main.xml

<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:id="@+id/root"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:orientation="vertical" >
    <!-- New GridView attributes: android:numColumns="4" means the grid has 4
         columns; the number of rows changes dynamically and is decided by the
         adapter -->
    <GridView
        android:id="@+id/gridview1"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:gravity="center"
        android:horizontalSpacing="1dp"
        android:verticalSpacing="1dp"
        android:numColumns="4" >
    </GridView>
    <!-- The ImageView component -->
    <ImageView
        android:id="@+id/imageView1"
        android:layout_width="240dp"
        android:layout_height="240dp"
        android:layout_gravity="center_horizontal"
        android:src="@drawable/bomb5" />

</LinearLayout>
cell.xml

<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:orientation="horizontal"
    android:layout_width="fill_parent"
    android:layout_height="fill_parent"
    android:gravity="center_horizontal"
    android:padding="2dp" >
    <ImageView
        android:id="@+id/image1"
        android:layout_width="50dp"
        android:layout_height="50dp" />
</LinearLayout>

MainActivity.java

package com.example.gridviewtest;

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import android.app.Activity;
import android.os.Bundle;
import android.view.Menu;
import android.view.MenuItem;
import android.view.View;
import android.widget.AdapterView;
import android.widget.AdapterView.OnItemClickListener;
import android.widget.GridView;
import android.widget.ImageView;
import android.widget.SimpleAdapter;

public class MainActivity extends Activity {

    /*
     * A GridView organizes and displays multiple components in rows and
     * columns; a ListView is a special case of a GridView. The adapter is
     * used the same way as with a ListView.
     */
    GridView grid;
    ImageView imageView;
    // IDs of the images
    int[] imageIds = new int[] {
            R.drawable.bomb5, R.drawable.bomb6, R.drawable.bomb7, R.drawable.bomb8,
            R.drawable.bomb9, R.drawable.bomb10, R.drawable.bomb11, R.drawable.bomb12,
            R.drawable.bomb13, R.drawable.bomb14, R.drawable.bomb15, R.drawable.bomb16
    };

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);

        imageView = (ImageView) findViewById(R.id.imageView1);
        // Create a list object whose elements are maps
        List<Map<String, Object>> listItems = new ArrayList<Map<String, Object>>();
        for (int i = 0; i < imageIds.length; i++) {
            Map<String, Object> item = new HashMap<String, Object>();
            item.put("image", imageIds[i]);
            listItems.add(item);
        }
        // Create a SimpleAdapter using the custom layout XML cell.xml
        SimpleAdapter ad = new SimpleAdapter(this, listItems, R.layout.cell,
                new String[] { "image" }, new int[] { R.id.image1 });

        grid = (GridView) findViewById(R.id.gridview1);
        // Attach the adapter
        grid.setAdapter(ad);
        // Add a listener fired when a list item is clicked
        grid.setOnItemClickListener(new OnItemClickListener() {

            @Override
            public void onItemClick(AdapterView<?> parent, View view,
                    int position, long id) {
                imageView.setImageResource(imageIds[position]);
            }
        });
    }

    @Override
    public boolean onCreateOptionsMenu(Menu menu) {
        // Inflate the menu; this adds items to the action bar if it is present.
        getMenuInflater().inflate(R.menu.main, menu);
        return true;
    }

    @Override
    public boolean onOptionsItemSelected(MenuItem item) {
        // Handle action bar item clicks here. The action bar will
        // automatically handle clicks on the Home/Up button, so long
        // as you specify a parent activity in AndroidManifest.xml.
        int id = item.getItemId();
        if (id == R.id.action_settings) {
            return true;
        }
        return super.onOptionsItemSelected(item);
    }
}





Author: tuke_tuke published in 15:40:39 2016/1/16. Original link
Read: 103 Comments: 0 View comments
]]>
Receiving network packets in the Linux kernel, part two: select/poll/epoll http://prog3.com/sbdm/blog/dog250/article/details/50528373 http://prog3.com/sbdm/blog/dog250/article/details/50528373 dog250 15:36:56 2016/1/16

The wakeup callback mechanism of the Linux 2.6+ kernel

The Linux kernel organizes all tasks waiting on some event into sleep queues, and the wakeup mechanism can asynchronously wake all the tasks on a sleep queue. Every node on a sleep queue owns a callback; when the wakeup logic wakes a sleep queue, it walks the queue's linked list and invokes each node's callback, and if during the walk it meets an exclusive node, it terminates the walk and does not continue to the following nodes. The overall logic can be expressed with the following pseudocode:

Sleep and wait side

define sleep_list;
define wait_entry;
wait_entry.task = current_task;
wait_entry.callback = func1;
if (something_not_ready); then
    # enter the blocking path
    add_entry_to_list(wait_entry, sleep_list);
go_on:
    schedule();
    if (something_not_ready); then
        goto go_on;
    endif
    del_entry_from_list(wait_entry, sleep_list);
endif
...

Wakeup side

something_ready;
for_each(sleep_list) as wait_entry; do
    wait_entry.callback(...);
    if (wait_entry.exclusive); then
        break;
    endif
done

We only need to focus hard on this callback mechanism: what it can do goes far beyond select/poll/epoll; Linux AIO is also built on it. Once a callback is registered, you can have a blocked path do almost anything when it is woken. In general, a callback contains the following logic:
common_callback_func(...)
{
    do_something_private;
    wakeup_common;
}

Here do_something_private is the wait_entry's own custom logic, while wakeup_common is the common logic, whose purpose is to add the wait_entry's task to the CPU's ready task queue and let the CPU schedule it.
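The sleep-queue walk just described can be modeled in a few lines of plain Java (a toy analogue with invented names, not kernel code): each entry carries its own callback and an exclusive flag that stops the traversal:

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of a kernel sleep queue: waking the queue runs each entry's
// callback in order, stopping after the first exclusive entry.
class SleepQueueSketch {
    static class WaitEntry {
        final Runnable callback;   // do_something_private + wakeup_common
        final boolean exclusive;
        WaitEntry(Runnable cb, boolean excl) { callback = cb; exclusive = excl; }
    }

    final List<WaitEntry> sleepList = new ArrayList<>();

    void addEntry(Runnable cb, boolean exclusive) {
        sleepList.add(new WaitEntry(cb, exclusive));
    }

    // The wakeup side: traverse the list, invoke callbacks, honor exclusivity.
    int wakeUp() {
        int woken = 0;
        for (WaitEntry e : sleepList) {
            e.callback.run();
            woken++;
            if (e.exclusive) break;   // terminate the traversal here
        }
        return woken;
    }
}
```

Swapping in a different Runnable per entry is exactly the freedom the kernel's per-node callback gives: select/poll, epoll, and AIO differ only in what they put inside that callback.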
Now here is something to think about: to implement select/poll, what should be done in the wait_entry callback?
.....

The logic of select/poll

Understand that in most cases, to process network data efficiently, a task handles a batch of sockets, reading from whichever one data arrives on. This means all these sockets must be treated fairly: you cannot block on a "data read" of any single one of them, i.e. you must not call recv/recvfrom on any socket in blocking mode. That is the essential requirement of socket multiplexing.
Suppose N sockets are handled by the same task; how is the multiplexing logic accomplished? Obviously, we must wait on the event "data is readable" rather than wait for the actual data! We block on the event "one or more of the N sockets has readable data"; as soon as this block is lifted there is certainly data to read, and the following recv/recvfrom call will certainly not block! On the other hand, the task must enqueue itself on the sleep_list of all these sockets at once, expecting that any socket with readable data can wake it.
The design of multiplexing models like select/poll is then obvious.
select/poll is designed very simply: a poll routine is introduced for each socket, and its judgement of "data readable" is as follows:
poll()
{
    ...
    if (receive queue is not empty) {
        ev |= POLLIN;
    }
    ...
}

When a task calls select/poll and no data is readable, the task blocks; at this point it has been enqueued on the sleep_list of all N sockets. As soon as one socket receives data, the task is woken, and what happens next is:
for_each_n_socket as sk; do
    event.evt = sk.poll(...);
    event.sk = sk;
    put_event_to_user;
done

As you can see, whenever any one socket has readable data, all N sockets are traversed and their poll routines called to check for readable data. In fact, when the task blocked in select/poll is woken, it has no idea which specific sockets have readable data; it only knows that at least one of them does, so it must traverse them all to verify. After the traversal, the user-mode task can perform reads on the sockets whose events fired, according to the returned result set.
So select/poll is very primitive. With 100,000 sockets (exaggerating?), if one socket becomes readable the system still has to traverse them all... which is why select limits multiplexing to at most 1024 sockets, controlled by a macro on Linux. select/poll implements socket multiplexing only naively and is fundamentally unsuited to high-volume network server scenarios. Its bottleneck is that it cannot scale as the number of sockets grows.
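This readiness-based model is exactly what higher-level APIs expose. The sketch below uses Java NIO's Selector (on Linux it is backed by epoll rather than select, but the programming model, block on "some channel is ready" and then iterate the ready set, is the one described above):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;

// Minimal readiness multiplexing: block until at least one registered
// channel has a pending event, then iterate only over the selected keys.
public class SelectSketch {
    public static int readyCount() throws IOException {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress("127.0.0.1", 0)); // any free port
        server.configureBlocking(false);                    // required to register
        server.register(selector, SelectionKey.OP_ACCEPT);

        // A client connection makes the server channel "acceptable".
        SocketChannel client = SocketChannel.open(
                new InetSocketAddress("127.0.0.1", server.socket().getLocalPort()));

        int n = selector.select();          // blocks until an event is pending
        for (SelectionKey key : selector.selectedKeys()) {
            if (key.isAcceptable()) {
                server.accept();            // readiness means this returns at once
            }
        }
        client.close();
        server.close();
        selector.close();
        return n;
    }

    public static void main(String[] args) throws IOException {
        System.out.println(readyCount());
    }
}
```

Note that the subsequent accept (or recv) not blocking is exactly the guarantee the article derives: the block was on the event, not on the data.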

epoll's use of the wait_entry callback

Since a wait_entry callback can do anything, can we make it do more than the wakeup_common of the select/poll scenario?
To this end, epoll prepares a linked list called the ready_list; every socket on the ready_list has an event, and for data reading there really is readable data. What the wait_entry callback installed by epoll does is add its own socket to the ready_list, so that when epoll_wait returns, only the ready_list needs traversing. epoll_wait sleeps on a separate queue (single_epoll_waitlist), not on a socket's sleep queue.
Unlike select, a task using epoll does not need to enqueue itself on the sleep queues of all the multiplexed sockets. The sockets have their own queues; the task only needs to sleep on its own separate queue and wait for the event. The callback logic of each socket's wait_entry is:
epoll_wakecallback(...)
{
    add_this_socket_to_ready_list;
    wakeup_single_epoll_waitlist;
}
Therefore epoll needs an additional call, epoll_ctl ADD, which puts a socket into the epoll table. Its main job is to install a wakeup callback: it assigns the socket an epoll entry and initializes the wait_entry's callback to epoll_wakecallback. The whole chain from protocol-stack wakeup to epoll_wait is as follows:
The protocol stack wakes the socket's sleep queue:
1. a packet is put on the socket's receive queue;
2. the socket's sleep queue is woken, calling each wait_entry's callback;
3. the callback adds its own socket to the ready_list;
4. the separate queue on which epoll_wait sleeps is woken.
From there epoll_wait continues: it traverses the ready_list and calls each socket's poll procedure to collect events. This step is routine and essential, because every socket on the ready_list has readable data, so none of the work is wasted. This is the essential difference from select (which traverses everything even when there is nothing to read).
To sum up, epoll's logic amounts to the following routines:

epoll add logic

define wait_entry
wait_entry.socket = this_socket;
wait_entry.callback = epoll_wakecallback;
add_entry_to_list(wait_entry, this_socket.sleep_list);


epoll wait logic

define single_wait_list
define single_wait_entry
single_wait_entry.callback = wakeup_common;
single_wait_entry.task = current_task;
if (ready_list_is_empty); then
    # enter the blocking path
    add_entry_to_list(single_wait_entry, single_wait_list);
go_on:
    schedule();
    if (ready_list_is_empty); then
        goto go_on;
    endif
    del_entry_from_list(single_wait_entry, single_wait_list);
endif
for_each_ready_list as sk; do
    event.evt = sk.poll(...);
    event.sk = sk;
    put_event_to_user;
done;

Epoll wake-up logic

Add_this_socket_to_ready_list;
Wakeup_single_wait_list;

With the above, we can draw the following epoll flow chart; you can compare it with the select/poll flow chart in the first part of this series.




As you can see, the essential difference between epoll and select shows up at wakeup time: with epoll, every item (socket) has its own separate wakeup callback, while with select there is only one! With epoll, when a socket has an event, its own callback handles it independently. From a macro perspective, epoll's efficiency lies in separating two kinds of sleeping waits. One is epoll_wait itself: what it waits for is "any socket having an event", which is the return condition of epoll_wait, so it is not suited to sleeping on any particular socket's sleep queue; if it did, which socket should it pick, with so many of them? So it simply sleeps on its own queue. A socket's sleep queue should concern only that socket, so the other kind of wait belongs to each individual socket, sleeping on its own queue.


epoll's ET and LT

It is time to talk about ET and LT. The biggest controversy is over which performs better, not over how they are used. All kinds of documents claim that ET is the efficient one, but in practice this is not so: for practical purposes LT is just as efficient, and safer. So what is the difference between the two?

Conceptual distinction

ET (edge-triggered): you are notified only when the state changes, for example when the buffer goes from empty (unreadable) to non-empty; if data merely remains in the buffer, you are not notified again.
LT (level-triggered): as long as there is data in the buffer, you keep being notified.
Check any amount of material and the answers are all variants of the above, but if you look at the Linux implementation, ET turns out to be the more confusing one. What counts as a state change? Suppose 10 packets arrive in the receive buffer one after another. Comparing with the flow chart above, clearly the wakeup callback would be invoked 10 times; does that mean the socket is added to the ready_list 10 times? Certainly not: when the second packet's wakeup callback runs, it finds the socket already on the ready_list and does not add it again. Now suppose epoll_wait returns, the user reads exactly one packet, and then (due to a bug, say) never reads again, leaving nine packets in the buffer. Here is the question: if the protocol stack now queues one more packet, is there a notification or not? By the conceptual definition there should be none, since this is not a "state change"; but on Linux, if you try it, you will find you are notified, because whenever a packet is queued on the socket, the wakeup callback fires and puts the socket on the ready_list, and in ET mode the socket was removed from the ready_list before epoll_wait returned. So under ET, if you find your program blocked in epoll_wait, you cannot conclude that some packet was never received; it may simply be that packets have stopped arriving, and if one more packet arrives, epoll_wait will return, even though that packet brings no edge transition of the buffer state.
Therefore, the "change in buffer state" should not be understood simply as data-present versus data-absent, but as packet-arrived versus packet-not-arrived.
ET and LT are originally interrupt concepts. If you view the arrival of a packet, that is, its insertion into the socket's receive queue, as an interrupt event, then edge triggering is exactly that notion, is it not?

Implementation difference

In the implementation, the difference between ET and LT is this: in LT mode, once a socket has had an event it keeps rejoining the ready_list; it is removed at each poll and re-added whenever the poll still detects an event of interest. LT thus determines events through the poll routine itself rather than relying entirely on the wakeup callback; this is the true meaning of poll, namely constant polling! In other words, LT mode is fully polling-driven: each epoll_wait polls every ready socket again, until a poll finds no event of interest, and only then does the socket rest; from that point on, only the arrival of a new packet can put it back on the ready_list via the wakeup callback. The difference between the two can be seen in the following code.
epoll_wait:
for_each_ready_list_item as entry; do
    remove_from_ready_list(entry);
    event = entry.poll(...);
    if (event) then
        put_user;
        if (LT) then
            # the result of this poll carries over to the next round
            add_entry_to_ready_list(entry);
        endif
    endif
done


Difference in performance

The performance difference shows up mainly in data structures and algorithms. For epoll, the main work is list manipulation in the wakeup callback. Under ET, the wakeup callback adds the socket to the ready_list; under LT, besides the wakeup callback, the next poll inside epoll_wait may also re-add the socket to the ready_list. That is a little extra work, but it is not the fundamental performance difference; the fundamental difference lies in list traversal. With a huge number of sockets in LT mode, since a socket that once had an event keeps rejoining the ready_list, even sockets with no remaining event are polled once more just to confirm it, and that extra traversal over event-free sockets is wasted time; ET does not pay it. But note that the cost of traversing a list only shows when the list is long: do you really think a thousand or so sockets will expose LT's disadvantage? Admittedly, ET reduces the number of times "data is readable" is raised, but it has no overwhelming advantage.
LT is easier to use than ET and less prone to getting stuck; my advice is to program normally with LT rather than showing off occasionally with ET.

Difference in programming

epoll in ET mode with blocking sockets cannot recognize the "queue empty" event, so you can end up blocked in a recv on one particular socket instead of in the epoll_wait call that monitors all of them. This may not break the code's operation, as long as data keeps arriving on that socket, but it breaks the programming logic: it defeats the multiplexing and can starve a large number of sockets that have data yet never get read. LT has similar hazards, but LT persistently re-reports readable data, so events are not so easily lost through programming mistakes.
With LT, since readability keeps being reported as long as data remains, you can read whenever you like; there is always "the next poll" as an opportunity to actively check whether data is readable, so even blocking mode does not starve other sockets across the blocking boundary, and you can read as much or as little as you want. With ET, the kernel notifies you when data becomes readable, and it will notify you again when new data arrives, but you cannot control whether and when new data will come; therefore you must read all the data before moving on, and reading it all means you must be able to detect that the buffer is empty. In other words, you must use non-blocking mode and read until the call returns the error EAGAIN.

Some tips for ET mode:

1. The receive-queue buffer accounting includes the length of the skb structure itself, roughly 230 bytes.
2. In ET mode, the number of times the wakeup callback adds the socket to the ready_list is at most the number of packets received, so:
multiple packets arriving fast enough may trigger only one successful callback, and the socket is then added to the ready_list only once;
if the receive queue is full, subsequent packets cannot be queued;
this produces a cork effect;
=> a small packet that fits the residual hole in the buffer can trigger an ET-mode epoll_wait to return; if the minimum length were 1, one could even send a zero-payload packet to lure epoll_wait into returning;
=> but because the skb structure has its own inherent size, the lure above is not guaranteed to succeed.
3. For the epoll thundering-herd problem, refer to nginx's experience.
4. epoll can also borrow NAPI's interrupt-mitigation scheme: until the recv routine returns EAGAIN or an error, the epoll wakeup callback is no longer invoked, meaning that while the buffer is non-empty, even newly arriving packets bring no notification:
a. once the epoll wakeup callback has been called for a socket, cut off subsequent notifications;
b. when recv returns EAGAIN or an error, re-enable subsequent notifications.
Author: dog250, published 15:36:56 2016/1/16
The reception of network packets in the Linux kernel - part one: concepts and framework
Http://prog3.com/sbdm/blog/dog250/article/details/50528280
Dog250 15:26:51 2016/1/16
Two events are involved in receiving a packet:
1. notification that a packet has arrived;
2. receipt of that notification, and retrieval of the data from the packet.
These two events occur at the two ends of the protocol stack, that is, at the NIC/protocol-stack boundary and at the protocol-stack/application boundary:
NIC/protocol-stack boundary: the NIC signals packet arrival, and the interrupt handler hands the packet to the protocol stack;
Protocol-stack/application boundary: the protocol stack fills the socket's queue with packets and informs the application that there is data to read; the application is responsible for receiving the data.
This article goes into the details of these two events on the two boundaries, plus the surrounding details of NIC interrupts, NAPI, poll, select/poll/epoll, and so on; it assumes you already know roughly what these are.

Events at the NIC / protocol-stack boundary

When a packet arrives, the NIC triggers an interrupt, and the protocol stack thereby learns of the packet-arrival event; how many packets to collect at that point is entirely up to the stack, and collecting them is the NIC interrupt handler's task. One could of course forgo interrupts and instead have a dedicated thread continuously poll the NIC for arriving packets, but that burns CPU and does a great deal of useless work, so it has been essentially abandoned; for asynchronous events like these, interrupt-based notification is the natural choice. Overall, the collection logic falls into the following two schemes:
a. each arriving packet interrupts the CPU; the CPU schedules the interrupt handler to receive it, the handling being split into a top half and a bottom half, with the core protocol-stack processing done in the bottom half.
b. a packet arrival interrupts the CPU; the CPU schedules the interrupt handler, disables further interrupt delivery, and schedules the bottom half to poll the NIC; once the packets are drained or a threshold is reached, interrupts are re-enabled.
Scheme (a) causes serious performance damage when packets keep arriving at high rates, so in that case scheme (b) is generally used; this is Linux's NAPI.

I do not want to say much more about events on the NIC/protocol-stack boundary, because it involves many hardware details. For example, with NAPI, after interrupts are masked, how does the NIC internally buffer packets? And on multi-core processors, can a NIC steer the receive interrupts for different packets to different CPU cores? That leads to the topic of multi-queue NICs, which goes beyond what an ordinary kernel programmer handles; to learn more you need vendor material, such as Intel's specifications and those dizzying manuals.


Events at the protocol-stack / socket boundary

To make things easier to understand, I therefore decided to describe the same things at the other boundary, the protocol-stack/application boundary, which is territory that kernel programmers and even application programmers genuinely care about. To simplify the later discussion, I will rename this protocol-stack/application boundary the protocol-stack/socket boundary. The socket isolates the protocol stack from the application; it is an interface: to the protocol stack it stands for an application, and to the application it stands for the protocol stack. When a packet arrives, the following happens:
1) the protocol stack puts the packet into the socket's receive buffer queue and notifies the application holding the socket;
2) the CPU schedules the application holding the socket, which takes the packet out of the receive buffer queue; reception is complete.
The overall schematic diagram is as follows



Socket elements

As shown in the figure above, each socket's packet-receiving logic involves the following elements:

Receive queue

The protocol stack queues processed packets here; the application is woken up to read data from this queue.

Sleep queue

A process or thread associated with the socket that finds no readable data can sleep on this queue; as soon as the protocol stack queues a packet into the socket's receive queue, it wakes the processes or threads on the sleep queue.

Socket lock

An execution flow must lock the socket while operating on its metadata. Note that the receive queue and the sleep queue do not need this lock for protection; what the lock protects are things like modifications of the socket buffer size and TCP in-order reception.

This model is simple and direct: just as the NIC interrupts the CPU to notify it of an arriving packet, the protocol stack notifies the application, in this way, that data is readable. Before going into the details and into select/poll/epoll, let me first mention two side topics; not that they must be discussed here, but they are related, so I bring them up briefly without taking much space.

1. The thundering herd and wakeup

Consider logic like TCP accept. For a large web server, multiple processes or threads typically accept on the same listen socket at once. When the protocol stack queues a client socket into the accept queue, should it wake all of those threads, or only one? If all are woken, clearly only one thread will grab the socket, and the losers go back to sleep having been woken in vain: the classic TCP thundering herd. So an exclusive wakeup is used: only the first thread on the sleep queue is woken, after which the wakeup logic exits without waking the rest. This avoids the herd.
This topic has been discussed online at enormous length, but think carefully and you will find that exclusive wakeup still has a problem: it can greatly reduce efficiency.
Why? Because the protocol stack's wakeup operation and the application's actual accept are completely asynchronous; unless the application happens to be blocked in accept at the very moment the stack wakes it, nobody can guarantee what the application is doing. A simple example: on a multi-core system, the protocol stack receives several connection requests simultaneously while exactly that many threads wait on the sleep queue; how nice it would be if the stack's execution flows could wake those threads in parallel. But since a listen socket has only one accept queue, the queue's exclusive-wakeup mechanism throws most of that away: enqueue and dequeue on the accept queue must be serialized under a lock, and the whole process loses the advantage of multi-core parallelism. Hence REUSEPORT, and things built on it such as Sina's fastsocket, appeared. This weekend I studied the Linux 4.4 kernel updates carefully; they are genuinely impressive, and I will describe them in a separate article later.

2. REUSEPORT and multi-queue NICs

Before I first learned of Google's reuseport, I had personally done a similar patch. The idea at the time came from an analogy with multi-queue NICs: if one NIC can interrupt multiple CPUs, why can't one socket's data-readable event "interrupt" multiple applications? But the socket API had long been fixed, which was a blow to my idea: a socket is a file descriptor representing one five-tuple (unconnected UDP sockets and TCP listen sockets excepted!), and protocol-stack events are tied to exactly that five-tuple... So to make the idea workable, the only option was to change the socket API: allow multiple sockets to bind the same IP address/port, and then route flows among them by a hash of the source IP address/port. I implemented that idea, and it really is the same thought as multi-queue NICs, exactly the same. Doesn't a multi-queue NIC steer interrupts to different CPU cores by hashing the five-tuple (or some N-tuple, let's not be pedantic)? Since we are in the multi-core era, why not keep one accept queue per CPU core? I thought this transplant was terribly handsome; then I saw Google's reuseport patch and felt I had reinvented the wheel... What next, then, the accept-queue problem? Have the scheduler take connection scheduling into account... this time I was not naive, and so I came to see Sina's fastsocket.
Of course, with REUSEPORT's hash over the source IP/port, packets of the same flow automatically avoid being "interrupted" into different sockets' receive queues.

Well, that ends the digressions; on to the details.

Receive queue management

Receive queue management is in fact very simple: the queue is a linked list of skbs. The protocol stack locks the queue itself, inserts the skb into the list, and then wakes the threads on the socket's sleep queue; a woken thread then locks the socket and pulls the skb's data out of the receive queue. That simple.
At least in kernel 2.6.8 it is done exactly this way. Later versions are optimized variants of the same scheme; it has gone through two rounds of optimization.
   

Receive path optimization 1: introducing the backlog queue

Consider the more complicated details: processing received data may require modifying the socket buffer accounting; the application's recv routine needs to lock the socket; on complex multi-core CPUs, multiple applications may operate on the same socket while multiple protocol-stack execution flows may also be queueing skbs into the same socket's receive buffer [for details see "TCP optimization of the multi-core Linux kernel path - the only proper course to take"], so the natural locking granularity is the socket itself. While an application holds the socket, the protocol stack, which may be running in soft-interrupt context, cannot sleep and wait; to keep the stack's execution flow from spinning on the lock, a backlog queue is introduced: while the application holds the socket, the stack only needs to drop the skb into the backlog queue and return. Then who ultimately processes the backlog queue?
Whoever caused the work handles it! The skbs went to the backlog precisely because the application had locked the socket, so when the application releases the socket, it moves the skbs from the backlog into the receive queue itself, simulating the protocol stack's enqueue and wakeup operations.
With the backlog queue introduced, the single receive queue becomes a two-stage relay queue, rather like a pipeline. Either way the protocol stack never blocks waiting: if the stack cannot immediately put the skb on the receive queue, that job falls to whoever locked the socket, to be done once it is willing to give the lock up. The operation routine is as follows:
Protocol stack enqueues an skb:
    get the socket spin lock
    if the application occupies the socket: put the skb on the backlog queue
    if the application does not occupy the socket: put the skb on the receive queue, wake up the sleep queue
    release the socket spin lock
Application receives data:
    get the socket spin lock
    mark the socket occupied (blocking others out)
    release the socket spin lock
    read data: having exclusive hold of the socket, it can safely copy the
        receive queue's skb contents to user space
    get the socket spin lock
    move the backlog queue's skbs onto the receive queue (really the protocol
        stack's job, postponed until now because the application held the
        socket), wake up the sleep queue
    release the socket spin lock
As you can see, the so-called socket lock is not one simple spin lock; different paths use different locking modes. In a word, any scheme that protects the socket's metadata is a reasonable one, and what we see here is a two-layer locking model.

The two-layer lock framework

After all that rambling, we can abstract the sequence above into a more general model that applies in certain scenarios. Here is the pattern.
Participant classes: NON-Sleep (cannot sleep), Sleep (can sleep)
Number of participants: one or more in each class
Competition: NON-Sleep vs NON-Sleep, Sleep vs Sleep, NON-Sleep vs Sleep
Data structures:
X - the locked entity
X.LOCK - spin lock; locks the non-sleeping path and protects the tag lock
X.FLAG - tag lock; locks the sleeping path
X.sleeplist - queue of tasks waiting to acquire the tag lock

Lock / unlock logic for NON-Sleep class:

spin_lock(X.LOCK);

if (X.FLAG == 1) {
    // add something-to-do to the backlog
    delay_func(...);
} else {
    // do it directly
    direct_func(...);
}

spin_unlock(X.LOCK);

Lock / unlock logic for Sleep class:

spin_lock(X.LOCK);
do {
    if (X.FLAG == 0) {
        break;
    }
    for (;;) {
        ready_to_wait(X.sleeplist);
        spin_unlock(X.LOCK);
        wait();
        spin_lock(X.LOCK);
        if (X.FLAG == 0) {
            break;
        }
    }
} while (0);
X.FLAG = 1;
spin_unlock(X.LOCK);

do_something(...);

spin_lock(X.LOCK);
if (have_delayed_work) {
    do {
        fetch_delayed_work(...);
        direct_func(...);
    } while (have_delayed_work);
}
X.FLAG = 0;
wakeup(X.sleeplist);
spin_unlock(X.LOCK);


For the socket receive logic, direct_func above is simply "insert the skb into the receive queue and wake up the socket's sleep queue", while delay_func's task is to insert the skb into the backlog queue.
The abstract model is basically a two-layer lock: on the sleeping path, the spin lock only protects the flag bit, and the sleeping path locks with the tag bit instead of holding the spin lock itself; the modification of the tag bit is what the spin lock protects. This very fast flag operation substitutes for fully locking the slow business-logic path (such as the socket's packet processing), greatly reducing the CPU time competitors spend spinning. I recently used this model in a real scenario and the effect was genuinely good, which is why I deliberately abstracted the code above.
Introducing the two layers of locks frees the non-sleeping path from blocking: while a task on the sleeping path holds the socket, the stack can still drop packets into the backlog queue instead of waiting for the sleeping-path task to unlock. But sometimes the logic on the sleeping path is not that slow; if it is not slow, indeed very fast, with a very short hold time, can it not simply compete for the spin lock directly against the non-sleeping path? This is where the sleep path's fast lock comes in.
 

Receive path optimization 2: introducing the fast lock

A socket handler running in process/thread context can compete for the socket spin lock directly against the kernel protocol stack, provided the following conditions hold:
a. the critical section being protected is very small;
b. no other process/thread context is currently processing this socket in its socket handling logic.
When these conditions hold, the set of competitors is simple, and the obvious question is who processes the backlog queue. That question is actually a non-question: in this case the backlog cannot receive anything, because operating on the backlog requires holding the spin lock, and while the fast lock is held, the spin lock is held too; the two paths exclude each other. That makes condition (a) extremely important: a long delay in the critical section would make the protocol-stack path spin excessively! The new fast-lock framework looks like this:

Sleep class fast lock / unlock logic:

fast = 0;
spin_lock(X.LOCK);
do {
    if (X.FLAG == 0) {
        fast = 1;    // fast path: keep holding the spin lock itself
        break;
    }
    for (;;) {
        ready_to_wait(X.sleeplist);
        spin_unlock(X.LOCK);
        wait();
        spin_lock(X.LOCK);
        if (X.FLAG == 0) {
            break;
        }
    }
    X.FLAG = 1;
    spin_unlock(X.LOCK);
} while (0);

do_something_very_small(...);

do {
    if (fast == 1) {
        break;       // fast path: the spin lock is still held
    }
    spin_lock(X.LOCK);
    if (have_delayed_work) {
        do {
            fetch_delayed_work(...);
            direct_func(...);
        } while (have_delayed_work);
    }
    X.FLAG = 0;
    wakeup(X.sleeplist);
} while (0);
spin_unlock(X.LOCK);


The code is more complex than a plain spin_lock/spin_unlock pair because if X.FLAG is 1, the socket is already being processed by someone else, and the caller must fall back to blocking and waiting.

The above is the overall structure of the asynchronous packet queues and their locks at the protocol-stack/socket boundary. Summing up, it contains 5 elements:
a. the socket's receive queue
b. the socket's sleep queue
c. the socket's backlog queue
d. the socket's spin lock
e. the socket's occupancy flag
The following processes run among these 5 elements:


With this framework, network data can be transferred safely and asynchronously between the protocol stack and the socket. If you look at it carefully, and if you understand the Linux 2.6 kernel's wakeup mechanism well and think in terms of decoupling, I believe you can already work out how select/poll/epoll operate. I will describe that in the second part; once the basic concepts are thoroughly understood and mastered, a great many things can simply be derived.
Next, let the skb itself join the framework above.

The skb relay

In the Linux protocol stack implementation, an skb represents a packet. An skb may belong either to a socket or to the protocol stack, but never to both. An skb that belongs to the protocol stack is not associated with any socket; it is the responsibility of the stack alone. An skb that belongs to a socket has been bound to that socket, and all operations on it are that socket's responsibility.
Linux gives the skb a destructor callback: whenever the skb is assigned a new owner, the previous owner's destructor is invoked first, and then a new destructor is installed. What interests us most is the skb's last hop, from the protocol stack to the socket. Before the skb is queued into the socket's receive queue, the following function is called:
static inline void skb_set_owner_r(struct sk_buff *skb, struct sock *sk)
{
    skb_orphan(skb);
    skb->sk = sk;
    skb->destructor = sock_rfree;
    atomic_add(skb->truesize, &sk->sk_rmem_alloc);
    sk_mem_charge(sk, skb->truesize);
}

Here skb_orphan mainly calls back the destructor of the previous owner and then clears it, after which sock_rfree is installed as the new destructor. Once the skb_set_owner_r call completes, the skb officially enters the socket's receive queue:
skb_set_owner_r(skb, sk);

/* Cache the SKB length before we tack it onto the receive
 * queue. Once it is added it no longer belongs to us and
 * may be freed by other threads of control pulling packets
 * from the queue.
 */
skb_len = skb->len;

skb_queue_tail(&sk->sk_receive_queue, skb);

if (!sock_flag(sk, SOCK_DEAD))
    sk->sk_data_ready(sk, skb_len);

Finally, sk_data_ready is called to notify the tasks on the socket's sleep queue that data has been queued into the receive queue; it is really just a wakeup operation, after which the protocol stack returns. Clearly, all subsequent processing of the skb happens in process/thread context; once the skb's data has been taken out, the skb is not returned to the protocol stack but released by the process/thread itself. So the main job of the destructor callback sock_rfree is to return buffer space to the system, which comes down to two things:
1. subtract the space occupied by the skb from the socket's allocated receive memory:
    sk->sk_rmem_alloc -= skb->truesize;
2. add the space occupied by the skb back to the socket's preallocated quota:
    sk->sk_forward_alloc += skb->truesize;

Accounting for and limiting protocol memory usage

The kernel protocol stack is only one kernel subsystem, and its data comes from outside the machine; the data source is uncontrolled and easily abused by DDoS attacks. It is therefore necessary to limit a protocol's overall memory usage, for example: all TCP connections together may use only 10M of memory. The Linux kernel initially accounted only for TCP; statistics and limits for UDP were added later. The limits show up in several sysctl configuration parameters:
Net.ipv4.tcp_mem = 18978 25306 37956
Net.ipv4.tcp_rmem = 4096 87380 6291456
Net.ipv4.tcp_wmem = 4096 16384 4194304
Net.ipv4.udp_mem = 18978 25306 37956
....
Each of the triples above means:
the first value mem[0]: the normal value; memory usage below this is unconstrained;
the second value mem[1]: the warning value; above this, tightening measures begin;
the third value mem[2]: the hard limit; above this, memory usage is over the line, and data is to be discarded.
Note that these values are per protocol, while the recvbuff set via sockopt limits a single connection's buffer; the two are different things. To avoid checking the protocol limit too frequently, the kernel uses a preallocation mechanism: even the first packet, say 1 byte, "overdraws" a whole page of memory against the limit. No actual memory is allocated here, because actual allocation was already settled when the skb was created and when IP fragments were reassembled; here values are merely accumulated and checked against the limit, so the logic is just additions and subtractions, with the occasional multiplication and division; it consumes nothing but a little CPU during the calculation.

The calculation works as follows:
proto.memory_allocated: one per protocol; the total memory this protocol's socket buffers currently use in the kernel to store skbs;
sk.sk_forward_alloc: one per socket; the amount of memory currently preallocated to this socket, usable for storing skbs;
skb.truesize: the size of the skb structure itself plus the size of its data.
The accumulation routine, run just before the skb enters the socket's receive queue:
ok = 0;
if (skb.truesize < sk.sk_forward_alloc) {
    ok = 1;
    goto addload;
}
pages = how_many_pages(skb.truesize);

tmp = atomic_add(proto.memory_allocated, pages * page_size);
if (tmp < mem[0]) {
    ok = 1;
    # normal
}

if (tmp > mem[1]) {
    ok = 2;
    # tight
}

if (tmp > mem[2]) {
    ok = 0;
    # over the hard limit
}

if (ok == 2) {
    if (do_something(proto)) {
        ok = 1;
    }
}

addload:
if (ok == 1) {
    sk.sk_forward_alloc = sk.sk_forward_alloc - skb.truesize;
    proto.memory_allocated = tmp;
} else {
    drop skb;
}

When the skb is released and its destructor is called, the amount by which it drew down sk.sk_forward_alloc during its stay at the socket is returned:
sk.sk_forward_alloc = sk.sk_forward_alloc + skb.truesize;

Protocol buffer reclamation (invoked when an skb is released, or when expired skbs are removed):
if (sk.sk_forward_alloc > page_size) {
    pages = sk.sk_forward_alloc rounded down to whole pages;
    prot.memory_allocated = prot.memory_allocated - pages * page_size;
}

This logic can be seen in sk_rmem_schedule and the other sk_mem_XXX functions.
That ends the first part of this article; the second part will focus on the select/poll/epoll logic.
Author: dog250, published 2016/1/16 15:26:51
Chapter 2, Section 2, Exercise 3: simulating ferry management with queues
Http://prog3.com/sbdm/blog/u013595419/article/details/50528271
Author: u013595419, 15:10:16 2016/1/16

Problem description

A ferry carries vehicles across a river; each crossing can take 10 vehicles. The vehicles waiting to cross are either passenger cars or trucks, and boarding follows these rules:

1) vehicles of the same kind board in the order they arrived;
2) passenger cars board before trucks: for every 4 passenger cars that board, one truck is allowed on;
3) if fewer than 4 passenger cars are waiting, trucks take the remaining places;
4) if no truck is waiting, the passenger cars may keep boarding.

Design an algorithm to simulate this ferry management.

Algorithm idea

On analysis, this is just the basic queue operations; the only twist is an ordering restriction applied when enqueuing onto the ferry.

  • Use queue Q for the vehicles loaded onto the ferry, queue Qp for the waiting passenger cars, and queue Qt for the waiting trucks;
  • While Qp has enough elements, dequeue 4 elements from Qp and then 1 element from Qt, until the length of Q reaches 10;
  • If Qp runs short of elements, fill the remaining places from Qt.

Algorithm description

void Manager() {
    /* Q, Qp, Qt, the element e and the counters car and count are as above */
    if (IsEmpty(&Qp) == 0 && (car < 4 || IsEmpty(&Qt) != 0)) {
        DeQueue(&Qp, &e);     /* a passenger car boards (rule 4: no trucks, cars keep boarding) */
        EnQueue(&Q, e);
        car++;
        count++;
    } else if (car == 4 && IsEmpty(&Qt) == 0) {
        DeQueue(&Qt, &e);     /* after 4 passenger cars, one truck boards */
        EnQueue(&Q, e);
        car = 0;
        count++;
    } else {
        while (count < MaxSize && IsEmpty(&Qt) == 0) {
            DeQueue(&Qt, &e); /* fewer than 4 cars waiting: fill with trucks */
            EnQueue(&Q, e);
            count++;
        }
    }
    if (IsEmpty(&Qt) != 0 && IsEmpty(&Qp) != 0) {
        count = MaxSize + 1;  /* both queues empty: stop loading */
    }
}

The complete code is given in the appendix below.


Appendix

#include <stdio.h>

#define MaxSize 10

typedef char ElemType;
typedef struct {
    ElemType data[MaxSize];
    int front, rear;
} SqQueue;

void InitQueue(SqQueue *);
void EnQueue(SqQueue *, ElemType);
void DeQueue(SqQueue *, ElemType *);
int IsEmpty(SqQueue *);
void PrintQueue(SqQueue);

int main(int argc, char *argv[])
{
    SqQueue Q;   /* vehicles loaded onto the ferry */
    SqQueue Qp;  /* waiting passenger cars */
    SqQueue Qt;  /* waiting trucks */

    InitQueue(&Q);
    InitQueue(&Qp);
    InitQueue(&Qt);

    ElemType x = 'P';
    for (int i = 0; i < 6; i++) {
        EnQueue(&Qp, x);
    }
    ElemType y = 'T';
    for (int i = 0; i < 6; i++) {
        EnQueue(&Qt, y);
    }

    int count = 0;  /* vehicles loaded so far */
    int car = 0;    /* passenger cars boarded since the last truck */
    ElemType e;

    /* simulate the ferry loading */
    while (count < MaxSize) {
        if (IsEmpty(&Qp) == 0 && (car < 4 || IsEmpty(&Qt) != 0)) {
            /* a passenger car boards (rule 4: with no trucks waiting,
               cars may keep boarding even after 4 in a row) */
            DeQueue(&Qp, &e);
            EnQueue(&Q, e);
            car++;
            count++;
        } else if (car == 4 && IsEmpty(&Qt) == 0) {
            /* after 4 passenger cars, one truck boards */
            DeQueue(&Qt, &e);
            EnQueue(&Q, e);
            car = 0;
            count++;
        } else {
            /* fewer than 4 cars waiting: fill the rest with trucks */
            while (count < MaxSize && IsEmpty(&Qt) == 0) {
                DeQueue(&Qt, &e);
                EnQueue(&Q, e);
                count++;
            }
        }
        if (IsEmpty(&Qt) != 0 && IsEmpty(&Qp) != 0) {
            count = MaxSize + 1;  /* nothing left to load */
        }
    }

    PrintQueue(Q);

    return 0;
}

/*---------------------------------------------------------------*/

void InitQueue(SqQueue *Q)
{
    Q->front = 0;
    Q->rear = 0;
}

void EnQueue(SqQueue *Q, ElemType x)
{
    if (Q->rear == MaxSize) {   /* queue full */
        return;
    }
    Q->data[Q->rear++] = x;
}

void DeQueue(SqQueue *Q, ElemType *x)
{
    if (Q->front == Q->rear) {  /* queue empty */
        return;
    }
    *x = Q->data[Q->front++];
}

int IsEmpty(SqQueue *Q)         /* declared but missing from the original listing */
{
    return Q->front == Q->rear;
}

void PrintQueue(SqQueue Q)      /* declared but missing from the original listing */
{
    for (int i = Q.front; i < Q.rear; i++) {
        printf("%c", Q.data[i]);
    }
    printf("\n");
}
Author: u013595419, published 2016/1/16 15:10:16
Zookeeper in practice: a master-worker scenario
Http://prog3.com/sbdm/blog/luckyzhoustar/article/details/50528047
Author: ZHOUCHAOQIANG, 14:35:44 2016/1/16

In this part we will use the zkCli tool to implement a simple master-worker structure; the design involves the following roles.


Master

The master watches for new workers and new tasks, and assigns tasks to workers.

Worker

Workers register themselves with the system so that the master knows they are available to execute tasks.

Client

Clients create tasks and wait for the system's response.

 

The master role

Since there can be only one master, only one process can obtain control and become the master. To represent the master, we create an ephemeral node named /master.
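The original screenshot of this step did not survive, so here is a reconstructed zkCli command sketching it (the host data stored in the node is an illustrative assumption):

```
# first process: try to become master by creating an ephemeral node (-e)
[zk: localhost:2181(CONNECTED) 0] create -e /master "master1.example.com:2223"
Created /master
```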


In the operation above we created the ephemeral node /master and stored some host information in it; the -e parameter indicates that we are creating an ephemeral node.


Now, in another process, we execute the following operation:
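A reconstructed sketch of the second process's attempt (again, screenshots were lost; node data is illustrative):

```
# second process: the create fails, so watch /master instead
[zk: localhost:2181(CONNECTED) 0] create -e /master "master2.example.com:2223"
Node already exists: /master
[zk: localhost:2181(CONNECTED) 1] stat /master true
```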




When we try to create /master again, we are told that it already exists. We then set a watch on this node with the stat command. Now, if we simply shut down the first process, we will see the following output in the current process:
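The notification would look roughly like this (reconstructed transcript):

```
WATCHER::
WatchedEvent state:SyncConnected type:NodeDeleted path:/master
```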


The watch tells us that the /master node has been deleted.





Tasks, assignments and workers


Before we discuss workers and clients further, we create several important nodes:






The three nodes above are all persistent nodes and contain no data; we use them to track which workers are available and to assign jobs to them.

In a real application these nodes would be created by the master or by some other bootstrap program. For now we add a watch on the children of these nodes:




Above, we used the optional true parameter to add the watch; the effect is the same as with the stat command.
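Reconstructed transcript of the node creation and the child watches; the names /workers, /tasks and /assign follow the master-worker scheme described above and are an assumption, since the original screenshots were lost:

```
[zk: localhost:2181(CONNECTED) 0] create /workers ""
Created /workers
[zk: localhost:2181(CONNECTED) 1] create /tasks ""
Created /tasks
[zk: localhost:2181(CONNECTED) 2] create /assign ""
Created /assign
[zk: localhost:2181(CONNECTED) 3] ls /workers true
[]
[zk: localhost:2181(CONNECTED) 4] ls /tasks true
[]
```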


The worker role

First, a worker needs to notify the master that it can execute tasks, so it creates an ephemeral node under /workers representing itself. Note that when the worker creates this child node, we see the corresponding output from the watch set earlier.
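A reconstructed sketch of the registration (node data is illustrative):

```
[zk: localhost:2181(CONNECTED) 0] create -e /workers/worker1.example.com "worker1.example.com:2224"
Created /workers/worker1.example.com

# meanwhile, the shell watching /workers prints:
WATCHER::
WatchedEvent state:SyncConnected type:NodeChildrenChanged path:/workers
```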




Next, the worker creates the parent node /assign/worker1.example.com for its assignments, and watches it for changes:
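Reconstructed, this step would look like:

```
[zk: localhost:2181(CONNECTED) 1] create /assign/worker1.example.com ""
Created /assign/worker1.example.com
[zk: localhost:2181(CONNECTED) 2] ls /assign/worker1.example.com true
[]
```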




The worker is now ready to start work; next we discuss the client role.

 

 

The client role

Clients add tasks to the system. For this example the content of the task does not matter; assume the client asks the master-worker system to run a cmd command. To add a task to the system, the client performs the following operation:
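Reconstructed sketch of the task creation; the -s flag makes the node sequential, so concurrent clients get unique task names:

```
[zk: localhost:2181(CONNECTED) 0] create -s /tasks/task- "cmd"
Created /tasks/task-0000000000
```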




Once the task node is created, you will see the following output:




Next, the master checks for the new task, gets the list of available workers, and assigns the task:
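A reconstructed sketch of the assignment, reusing the hypothetical task and worker names from above:

```
[zk: localhost:2181(CONNECTED) 0] ls /tasks
[task-0000000000]
[zk: localhost:2181(CONNECTED) 1] ls /workers
[worker1.example.com]
[zk: localhost:2181(CONNECTED) 2] create /assign/worker1.example.com/task-0000000000 ""
Created /assign/worker1.example.com/task-0000000000
```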




 

Immediately afterwards, the worker receives the watch notification, as follows:





The worker next checks the task information:



Once the worker has completed the task it is executing, it adds a status node under the task in /tasks.
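For example, the worker might mark the hypothetical task from earlier as finished like this (reconstructed, as the screenshot was lost):

```
[zk: localhost:2181(CONNECTED) 0] create /tasks/task-0000000000/status "done"
Created /tasks/task-0000000000/status
```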

The client receives the notification and begins to check the results.



The content of the status node lets the client determine whether execution succeeded; if it did, the content will be the string done. The payload may look different in other distributed systems, but whatever it is, the mechanism is the same, because the essence of zookeeper is the same.


Author: ZHOUCHAOQIANG, published 2016/1/16 14:35:44
Getting started: building your first Zookeeper
Http://prog3.com/sbdm/blog/luckyzhoustar/article/details/50528014
Author: ZHOUCHAOQIANG, 14:24:52 2016/1/16

First, download the Zookeeper tar package from Http://zookeeper.apache.org, then extract the tar package on Linux:


# tar -xvzf zookeeper-3.4.5.tar.gz


If you are using the Windows operating system, you will need an archive tool to extract the tar package, and you also need a JDK Java development environment, version 1.6 or above.


In the extracted directory, the bin directory contains the command scripts: those ending in .sh are executed on Linux, and those ending in .cmd are executed on Windows. The lib directory contains the jar library files that the Java code needs.


First Zookeeper session


The following creates a Zookeeper session in standalone mode. We will use the zkServer and zkCli command-line tools to carry out our operations; experienced administrators also use them to debug and manage Zookeeper.


Enter the conf directory and rename the zoo_sample.cfg file:


# mv conf/zoo_sample.cfg conf/zoo.cfg


The following step is optional; it separates Zookeeper's data storage directory from the Zookeeper install directory:


dataDir=/users/me/zookeeper


Finally, start the Zookeeper service:


# bin/zkServer.sh start
JMX enabled by default
Using config: ../conf/zoo.cfg
Starting zookeeper ... STARTED
#



The command above runs the Zookeeper service in the background. The following command runs it in the foreground instead, which lets us see more of the important output directly:


# bin/zkServer.sh start-foreground


Let's open a client.


# bin/zkCli.sh


<some output omitted>
2012-12-06 12:07:23,545 [myid:] - INFO [main:ZooKeeper@438] -
Initiating client connection, connectString=localhost:2181
sessionTimeout=30000 watcher=org.apache.zookeeper.
ZooKeeperMain$MyWatcher@2c641e9a
Welcome to ZooKeeper!
2012-12-06 12:07:23,702 [myid:] - INFO [main-SendThread
(localhost:2181):ClientCnxn$SendThread@966] - Opening
socket connection to server localhost/127.0.0.1:2181.
Will not attempt to authenticate using SASL (Unable to
locate a login configuration)
JLine support is enabled
2012-12-06 12:07:23,717 [myid:] - INFO [main-SendThread
(localhost:2181):ClientCnxn$SendThread@849] - Socket
connection established to localhost/127.0.0.1:2181, initiating
session
[zk: localhost:2181(CONNECTING) 0]
2012-12-06 12:07:23,987 [myid:] - INFO [main-SendThread
(localhost:2181):ClientCnxn$SendThread@1207] - Session
establishment complete on server localhost/127.0.0.1:2181,
sessionid = 0x13b6fe376cd0000, negotiated timeout = 30000

WATCHER::
WatchedEvent state:SyncConnected type:None path:null



The session-establishment process above is:

1. the client starts to establish a session

2. the client tries to connect to 127.0.0.1:2181

3. the client connects to the server and initializes its state

4. the state initialization succeeds

5. the server sends a synchronization event to the client


Now let's run a few commands to play around with zookeeper:


[zk: localhost:2181(CONNECTED) 0]

[zk: localhost:2181(CONNECTED) 0] ls /
[zookeeper]
[zk: localhost:2181(CONNECTED) 1] create /workers ""
Created /workers
[zk: localhost:2181(CONNECTED) 2] ls /
[workers, zookeeper]
[zk: localhost:2181(CONNECTED) 3]

[zk: localhost:2181(CONNECTED) 3] delete /workers
[zk: localhost:2181(CONNECTED) 4] ls /
[zookeeper]
[zk: localhost:2181(CONNECTED) 5] quit
Quitting...
2012-12-06 12:28:18,200 [myid:] - INFO [main-EventThread:ClientCnxn$
EventThread@509] - EventThread shut down
2012-12-06 12:28:18,200 [myid:] - INFO [main:ZooKeeper@684] - Session:
0x13b6fe376cd0000 closed

# bin/zkServer.sh stop
JMX enabled by default
Using config: ../conf/zoo.cfg
Stopping zookeeper ... STOPPED
#


Session lifecycle

A session passes through a series of states between its start and its end; the full lifecycle is shown below



Zookeeper cluster mode

The above was only standalone mode; next we build a cluster. Taking one host as an example, we run three different zookeeper services on the same host to obtain cluster-mode operation.

Reference article: Http://www.tuicool.com/articles/iMjMvm
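As a sketch of what such a setup might look like, three configurations on one host could be written as follows (paths and ports are illustrative assumptions; each dataDir must contain a myid file holding that server's number):

```
# zoo1.cfg -- zoo2.cfg and zoo3.cfg differ only in dataDir and clientPort
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/users/me/zookeeper1
clientPort=2181
server.1=127.0.0.1:2888:3888
server.2=127.0.0.1:2889:3889
server.3=127.0.0.1:2890:3890
```

Each server is then started with its own config, e.g. bin/zkServer.sh start zoo1.cfg.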

 

Lock in Zookeeper


Zookeeper supports many different kinds of locks, such as read-write locks, global locks and so on, and there are many ways to implement a lock with zookeeper. We will discuss a simple one to illustrate how applications use zookeeper.


Imagine an application in which n processes compete for a lock. Zookeeper does not expose a lock primitive directly, so we implement the locking mechanism through zookeeper's node interface: to acquire the lock, each process tries to create a node, and whoever creates it successfully holds the lock. There is a potential problem here: if process p dies without releasing the lock, no other process will ever get it, and the system deadlocks. To avoid this, we create an ephemeral node instead.


As long as the node exists, the other processes' create attempts fail. They then watch the node; once it is removed, they get their chance to acquire the lock.
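A zkCli sketch of this recipe (the node name /lock and its data are illustrative assumptions):

```
# holder: -e ties the lock's lifetime to the owning session
[zk: localhost:2181(CONNECTED) 0] create -e /lock "process-1"
Created /lock

# a competitor fails to create the node and watches it instead
[zk: localhost:2181(CONNECTED) 0] create -e /lock "process-2"
Node already exists: /lock
[zk: localhost:2181(CONNECTED) 1] stat /lock true

# when the holder dies or quits, its session ends and the watch fires:
WATCHER::
WatchedEvent state:SyncConnected type:NodeDeleted path:/lock
```

Because the node is ephemeral, a crashed holder releases the lock automatically when its session expires, which is exactly the deadlock the paragraph above sets out to avoid.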




Author: ZHOUCHAOQIANG, published 2016/1/16 14:24:52